Anyone using SpecFlow will likely have come across Context Injection and Scenario Context for storing data across different binding classes. (For more detail see: https://specflow.org/documentation/Sharing-Data-between-Bindings/)
As a developer, Scenario Context just seems very brittle compared with Context Injection. You use strings to save and retrieve data, and it is essentially a global variable system, which normally strikes me as wrong. Context injection, on the other hand, works nicely: different classes can be created to store different types of data.
Can anyone see a reason why you would want to use Scenario Context over Context Injection? I cannot think of any but maybe I am missing something?
On the difference: in addition to that, context injection gives you better static type safety. (I have written a post about the decentralized architecture model you can build up with context injection: http://gasparnagy.com/2017/02/specflow-tips-baseclass-or-context-injection/)
Why Scenario Context? First of all, this feature existed before context injection and has been used in many tutorials, etc. So it is somewhat better known.
Scenario context is also a somewhat easier programming concept: you just get and set a global variable. For context injection, you have to understand constructors, instance fields and local variables. I think these are important things to learn anyway, but they might be too much at once (without help).
Scenario context might also be useful when you write generic SpecFlow plugins and you don't want to depend on the fine details of the concrete project's dependency management system, but this is a pretty special case anyway.
Scenario context uses strings, which means you can pass the string as a parameter of your test. You could write a generic test method to store something in a Scenario Context variable.
E.g.:
public void GenericSaveIntTestMethod(string variableName, int intToSave)
{
    ScenarioContext.Current[variableName] = intToSave;
}
You could reuse this, saving different ints under different names.
I'm not sure if this is good or bad practice, but I've seen it used in the framework I'm working in.
I'm not sure if there is a way to do this with Context Injection.
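For comparison, here is a minimal sketch of the Context Injection equivalent (the ScenarioData class, its property and the step text are made up for illustration). You define a plain context class and let SpecFlow's built-in dependency injection hand the same instance to every binding class within a scenario, so the data stays strongly typed instead of being keyed by strings:

using TechTalk.SpecFlow;

// A plain class holding strongly typed scenario state (name is illustrative).
public class ScenarioData
{
    public int SavedValue { get; set; }
}

[Binding]
public class SaveSteps
{
    private readonly ScenarioData _data;

    // SpecFlow creates one ScenarioData per scenario and passes the
    // same instance to every binding class that asks for it.
    public SaveSteps(ScenarioData data)
    {
        _data = data;
    }

    [Given(@"I have saved the value (.*)")]
    public void GivenIHaveSavedTheValue(int value)
    {
        _data.SavedValue = value;
    }
}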
I would like to inject NSManagedObject subclasses with the Typhoon framework. I have not seen an example of that, but I am thinking it might be possible.
I am using MO Generator and have a superclass in between NSManagedObject and the ultimate child class. It's in this common abstract base class that I would want to inject, if that makes any difference.
Has anyone had any success with this?
Any advice would be appreciated. Please let me know if there is more information that I can provide.
Setting up Core Data with Typhoon:
Firstly, if you want to set up Core Data with Typhoon, so that you could inject, for example, data sources into your view controllers, there is a Typhoon+CoreData+RAC sample application that was kindly posted by Ryoichi Izumita. It shows:
Typhoon's UIStoryboard integration
Core Data
Reactive Cocoa
In this sample...
The top-level assembly is CDRApplicationAssembly
The AppDelegate is injected at startup with some Core Data components. This allows the app delegate to save the context when the application is terminated.
There's a CDRViewController, which is declared on the main storyboard. Because we've bootstrapped Typhoon from the app's plist file, all storyboards will be an instance of TyphoonStoryboard. These work just like regular storyboards, with the added benefit that dependencies are injected according to the rules outlined in our assembly. This controller is injected with a Core Data data source.
Ryoichi-san created a category on NSManagedObjectContext, making it easier to set up with DI and integrate Reactive Cocoa.
Core Data Assembly:
The main assembly refers to a helper assembly - CDRCoreDataComponents - which is responsible for setting up Core Data. Some of the values in this file are loaded from a configuration file, which makes it easy to set up, for example, production vs test environments.
Now to address your question specifically...
Injecting model classes themselves:
Persistent domain objects often tend to have properties without methods, and many argue that this should not be the case (Martin Fowler and others call it the 'anemic domain model' anti-pattern). They argue that in a correct object-oriented design, model objects have behaviors as well as properties, and that the ideal place for behaviors is close to the data they represent.
The problem is that:
In order for domain objects to have behaviors, they must often rely on collaborators.
But of course, if the objects seek out their own dependencies, we'd have another anti-pattern. DI is required.
The 'hook-point' approach (supported by Typhoon):
We can instruct Typhoon to inject a pre-obtained instance as follows:
Knight* knight = ... //Loaded from persistent storage
[componentFactory inject:knight]; //Matches by type
[componentFactory inject:knight withDefinition:@selector(selectorInAssembly)]; //Matches a specific definition in the assembly
This is the 'hook-point' approach. After obtaining an instance we tell Typhoon to inject it. First we inject the TyphoonComponentFactory itself into our Data Access Object, network client or whatever will be emitting the object. As the last step, we tell Typhoon to inject our model, according to the recipe defined in the assembly. Et voila!
A custom core data integration (not supported by Typhoon):
Instead of using this 'hook-point' approach, perhaps we could provide a tighter integration with Core Data (as we've done with UIStoryboard), so that the above step is not necessary? This is not currently supported by Typhoon.
Using AOP to Inject Domain Objects: (not supported by Typhoon)
In fact, besides a specific solution for Core Data, there's another approach to injecting any domain object using "AOP". By that we mean intercepting and instrumenting all of a domain object's init methods to subsequently load dependencies according to the rules in an Assembly. This is how the @Configurable annotation in Spring (a popular DI+AOP framework for Java) works. The problem is finding a suitable way to associate the model object with a TyphoonComponentFactory without this being too invasive (i.e. a singleton). There are some drawbacks to saying "every instance of this Product, Car and Holiday is associated with this assembly", which is why we've favored the 'hook-point' approach so far.
Your Feedback:
If you're happy with the hook-point approach then great. If you're interested in either a specific Core Data integration or the "AOP" solution described above, then we'd enjoy exploring it with you. There have already been a few discussions.
I am currently building an ASP.NET MVC application, which has been broken down into multiple modules (as well as a generic class library).
I have implemented a Unit Of Work pattern for my first module. This unit of work class contains a number of different repositories.
However, I was wondering whether or not it is good idea to have a separate Unit Of Work class for each module?
Well, EF itself supplies you with implementations of the Unit of Work and Repository patterns. Usually they are not exactly what you want, and it seems nice to add some methods to those native EF repositories, but in most cases it isn't worth the trouble.
Implementing your own Repository based on EF is not a good idea if your project is simple. It adds a lot of work but not much value.
Implementing a Unit of Work based on EF is a completely different story. The only reason I can see to do it is to have different UoWs for different parts of the solution. Avoid it otherwise, really.
We tried to implement both of these approaches in our project, ignoring the prebuilt ones. It was completely reasonable, because we were designing a modular solution and we didn't even know how many modules we would have in the end. We expected to add new modules while the system was already running and under heavy load. And I can say that it took a lot of time to develop such an application. Realizing that you need access to one more entity from some module leads to changes in several places - the first evidence of an inefficient design.
So, KISS and YAGNI argue against it. If you are tangled up in the question "should I add this stuff to my project?", just don't. You need a good reason to implement these parts yourself, not just a "nice design" bias, because it adds a lot of complexity. Even if you think you will need it some day, wait until that day. If you try to estimate which miscalculation would be more disastrous, I am pretty sure it is much easier to add something new to your project than to remove something that already exists.
Please see this and this
A unit of work is really just a way of keeping track of a set of entities that have been loaded into memory. Once loaded, we can work with the entities in the normal way: changing state, adding new entities and removing other entities. When we are ready to save our changes, we ask the unit of work to commit and it takes care of "flushing" the pending changes to the underlying database.
Is it a good idea to have a separate Unit Of Work class for each module?
My first thought is: how would a unit of work for one module differ from that of another? If they would differ, they probably shouldn't, because the domain should be persistence-ignorant and the data layer should be ignorant of the business logic.
Take for instance the UoW that comes with Entity Framework itself: the context. [When you create a context, do stuff, call SaveChanges() and dispose of it, it acts as a UoW.] You can probably use one context class for your whole application. You're not going to program any business logic in your context class, so there is no reason to have a context class per module unless each module uses really distinct parts of the database (which is hardly ever true). The same holds for a UoW you create yourself.
It's a bit beyond the scope of your question, but you could ask yourself whether you need your own UoW and repository classes as EF offers basic implementations of both (context and DbSets).
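To make that concrete, here is a minimal sketch (the ShopContext and Order names are invented for illustration, targeting EF6) of how a plain EF context already acts as both unit of work and repository:

using System.Data.Entity; // EF6; in EF Core this would be Microsoft.EntityFrameworkCore

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    // Each DbSet is essentially a repository for one entity type.
    public DbSet<Order> Orders { get; set; }
}

public static class Example
{
    public static void Main()
    {
        // The context tracks everything that is loaded, added or removed, and
        // SaveChanges() flushes all pending changes in one go: the unit of work.
        using (var context = new ShopContext())
        {
            context.Orders.Add(new Order { Total = 42m });
            context.SaveChanges();
        }
    }
}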
I am trying to figure out how to create multiple objects when using Dependency Injection. As far as I understand the standard approach is to inject a Factory which is then used to create the objects. The part I struggle with is how the Factory creates the objects. So far I see two possible solutions:
The Factory just uses new() to create the object.
Isn't DI supposed to free me of the use of new for non value objects?
What happens if the Object to be created has dependencies that could be resolved by the IoC?
Use the container as a service locator
This solves the problems of just new-ing objects, at the cost of introducing an anti-pattern. Or is it no longer an anti-pattern if the use of the service locator is constrained to the factories?
It feels like I can only choose between a bad and a bad solution. Is there something I am missing, or do I misunderstand something here?
Edit: Currently I am not using an IoC container at all, but I'm thinking about Ninject, although the Autofac DelegateFactories sound very promising.
For starters, I don't consider using a container as a service locator in factories an anti-pattern. There are genuine circumstances where it is entirely appropriate. Come to think of it, container-aware factories are really container extensions, and those seem to be excluded from service locator bashing. Even the purest IoC frameworks like Autofac or Ninject have extensive extension capabilities. The most typical use case for this pattern is resolving different implementations based on where the service is used.
With regard to using new to create instances inside factories, that is acceptable as well. The IoC/DI message got a bit distorted there: never using new is really a side effect, rather than the goal, of DI. The first imperative of Dependency Injection is to externalise creation of dependencies from the component. A factory satisfies that imperative as long as it itself gets injected into the component. The questions you need to ask yourself when evaluating such scenarios are:
Does the component itself create its dependencies? A: No, the factory does.
Can you make the component work with different dependencies without modifying it? A: Yes, by injecting a different factory.
I said this before: IoC containers are just factories on steroids. For the 80% use case they work out of the box. The other 20% might require tweaks of the above two varieties. I tend to use container-aware factories when I want to create components that require both registered dependencies and some input at run time, and new-ing factories when I create domain objects that don't have dependencies on other services, but take all their construction parameters at run time.
Although the interface for your factory will be defined at the application level, you would typically define the implementation of that factory class close to your DI configuration, thus as part of your composition root. Although calling the container directly from your code is an implementation of the Service Locator anti-pattern, any code that is defined inside the composition root is merely mechanics and is therefore not Service Locator. As long as newing up objects or calling into the container is done inside (or very close to) the composition root, this is not a problem, because the application will still be clean of any locator / container.
In other words: use the factory approach. Whether you need to new up objects directly inside your factory or make use of the container depends on the objects. Letting the container create the objects is preferable, especially when they have dependencies of their own, but not all objects can be created by the container. In that case you need to fall back on the new operator. Both are fine when the code is part of the composition root and not of the application. The factory itself can have dependencies of its own; this should not be a problem, as you can let the container wire up the factory instance.
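As a rough sketch of that split (all of the type names below are invented), the factory interface lives at the application level, while its implementation sits next to the composition root and is free to call new, combining container-supplied dependencies with run-time input:

// Application-level abstraction; consumers depend only on this interface.
public interface IReportGeneratorFactory
{
    ReportGenerator Create(string reportName);   // reportName is only known at run time
}

public interface IReportRepository { /* a dependency registered in the container */ }

public class ReportGenerator
{
    public ReportGenerator(IReportRepository repository, string reportName) { /* ... */ }
}

// The implementation lives in (or next to) the composition root, so new-ing objects
// or even calling the container here does not count as Service Locator.
public class ReportGeneratorFactory : IReportGeneratorFactory
{
    private readonly IReportRepository _repository;

    public ReportGeneratorFactory(IReportRepository repository)
    {
        _repository = repository;
    }

    public ReportGenerator Create(string reportName)
    {
        // Combines a container-supplied dependency with run-time data.
        return new ReportGenerator(_repository, reportName);
    }
}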
I'm writing my second real-life application, which uses DI. Overall I think it has led to a better design. But there are some code smells that I don't know how to solve.
I prefer to use constructor injection and have often observed that I need about 5 or more objects to be injected in the constructor. That seems like too many; maybe it's a design problem, not getting the SRP right, but I think my use of DI is also to blame.
I'm looking for "best practices" or a "rule of thumb". In general I seem to inject everything that isn't in the .NET Framework - is that overdoing it?
To get things started, here are two examples of objects that I inject, but am uncertain about.
Objects that are true singletons like application configuration or those small util classes, do you inject them?
They seem to be injected very often; the only reason to inject them seems to be to allow the value to be changed for testing, but Ayende seems to have solved that problem in another way: http://ayende.com/Blog/archive/2008/07/07/Dealing-with-time-in-tests.aspx.
Common objects such as logging, that are used in almost every object, should they be injected?
The rule of thumb I often use is that I inject things that are in the way of properly writing unit tests. When doing this you will sometimes end up abstracting away BCL classes (such as DateTime.Now, File, etc), and sometimes your own stuff. Good things to inject are services (such as ICustomerService, ICustomerUnitOfWorkFactory, or ICustomerRepository). Don't inject things like entities, DTOs and messages.
There are other reasons for injecting objects however, such as to be able to replace modules at a later time (for instance switch implementations for validation, UI, or O/RM), to allow parallel development within or across teams, and to lower maintenance.
I prefer to use constructor injection and have often observed that I need about 5 or more objects to be injected in the constructor.
As you already noted yourself, having many dependencies could be caused by not adhering to the SRP. What you can do, however, is group common dependencies and their logic into an aggregate service and inject that into consumers, as sketched below. Also see Mark Seemann's article about Aggregate Services.
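A minimal sketch of that idea (all the Shipment-related names are made up for illustration): instead of injecting a validator, a repository and a mailer separately into every consumer, you group them behind one service:

// Hypothetical types, purely to illustrate the aggregate service idea.
public class Shipment { public int Id { get; set; } }

public interface IShipmentValidator { void Validate(Shipment shipment); }
public interface IShipmentRepository { void Save(Shipment shipment); }
public interface IShipmentMailer { void SendConfirmation(Shipment shipment); }

// The aggregate service groups dependencies that always travel together.
public interface IShipmentFulfillment { void Fulfill(Shipment shipment); }

public class ShipmentFulfillment : IShipmentFulfillment
{
    private readonly IShipmentValidator _validator;
    private readonly IShipmentRepository _repository;
    private readonly IShipmentMailer _mailer;

    public ShipmentFulfillment(IShipmentValidator validator,
                               IShipmentRepository repository,
                               IShipmentMailer mailer)
    {
        _validator = validator;
        _repository = repository;
        _mailer = mailer;
    }

    public void Fulfill(Shipment shipment)
    {
        _validator.Validate(shipment);
        _repository.Save(shipment);
        _mailer.SendConfirmation(shipment);
    }
}

// A consumer that previously needed three constructor parameters now needs only one:
// public ShippingController(IShipmentFulfillment fulfillment) { ... }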
Objects that are true singletons like application configuration or those small util classes, do you inject them?
I am personally not a fan of the way Ayende proposes it. That is an Ambient Context, which is a specific sort of service locator construct. Doing this hides the dependency, because classes can call that static class without you having to inject it. Explicitly injecting it makes it much clearer that the class depends on time and that you need to control it in unit tests. Besides that, an ambient context makes it hard to write tests with frameworks such as MSTest, which tend to run tests in parallel; without countermeasures it makes your tests very unreliable. A better solution - for the DateTime.Now example - is to have an IClock interface, as is suggested here. As you can see, that answer scores much higher than Ayende's approach, which is shown in the same SO question.
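For reference, a minimal sketch of such an IClock abstraction (the type names are illustrative, not from a specific library):

using System;

public interface IClock
{
    DateTime Now { get; }
}

// Production implementation simply wraps the BCL.
public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// Test implementation lets each test control time explicitly and deterministically.
public class FakeClock : IClock
{
    public DateTime Now { get; set; } = new DateTime(2020, 1, 1);
}

// A consumer declares the dependency instead of calling DateTime.Now directly.
public class DeadlineChecker
{
    private readonly IClock _clock;
    public DeadlineChecker(IClock clock) { _clock = clock; }

    public bool IsExpired(DateTime deadline) => _clock.Now > deadline;
}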
Common objects such as logging, that are used in almost every object, should they be injected?
I inject them in my code, because that makes the dependencies clear. Note however that in my code I still hardly ever have to inject a logger. Think hard about every line you want to log and whether it isn't really a failure (or a cross-cutting concern that should be handled elsewhere). I usually throw an exception when something happens that I didn't expect; it allows me to find bugs fast. Or to put it in other words: don't filter, but fail fast. And please ask yourself: "Do I log too much?"
I hope this helps.
My personal rule of thumb is this:
inject it if you want it to be immutable
inject it if you want to be able to substitute it for testing purposes
Things like services can meet both of those criteria: the consumer should never change them, and you want to be able to substitute them come testing time. With the immutable items, you would still possibly have a property on the consuming object, but that property would only have a getter, not a setter. If you wanted to change the value, you would have to create a new instance of the object.
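A short sketch of that getter-only idea (the type names are invented for illustration):

public interface ITaxPolicy
{
    decimal RateFor(string region);
}

public class InvoiceService
{
    // Injected once through the constructor; there is no setter,
    // so consumers cannot swap the dependency after construction.
    public ITaxPolicy TaxPolicy { get; }

    public InvoiceService(ITaxPolicy taxPolicy)
    {
        TaxPolicy = taxPolicy;
    }
}

// To "change" the policy you construct a new InvoiceService with a different
// ITaxPolicy, which is exactly what a test does when it substitutes a fake.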
Should loggers be injected?
There is no reason to. Loggers are typically exposed via a static class, and are new'ed up from configuration entries, so even for testing purposes there is no need to inject them.
Should true singletons like application configuration be injected?
Once again, it is a globally accessible object which is easily modified for test purposes, so there is no need to inject it. The only time I would inject this is if the consumer were 'disconnected', i.e. created via reflection or called as a web service or remote object.
While DI is a nice pattern, too much of a good thing can still be unhealthy. If you sense a growing code smell, then examine each item you are injecting and ask yourself the question: do I NEED to inject this parameter?
A good starting point is to inject Volatile Dependencies.
You may want to also inject Stable Dependencies for further loose coupling, but if you need to prioritize, Volatile Dependencies is the best place to start.
Concerning constructor over-injection, it's really just a symptom of breaking SRP: see this related question: How to avoid Dependency Injection constructor madness?
I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). Often it is said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
One example of the use of DI is creating an application that can be deployed for several clients, using DI to inject customisations for each client, which could also be described as the GoF Strategy pattern (sketched below). Many of the GoF patterns can be facilitated with a DI framework.
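A tiny sketch of that per-client customisation (the pricing-related names are hypothetical): the consuming code never knows which client it is running for; the composition root or configuration decides which strategy to register.

public interface IPricingStrategy
{
    decimal PriceFor(decimal listPrice);
}

public class StandardPricing : IPricingStrategy
{
    public decimal PriceFor(decimal listPrice) => listPrice;
}

// A specific client's customisation, e.g. a negotiated 10% discount.
public class ContractPricing : IPricingStrategy
{
    public decimal PriceFor(decimal listPrice) => listPrice * 0.9m;
}

public class QuoteService
{
    private readonly IPricingStrategy _pricing;
    public QuoteService(IPricingStrategy pricing) { _pricing = pricing; }

    public decimal Quote(decimal listPrice) => _pricing.PriceFor(listPrice);
}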
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find that you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for the services it needs, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
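For instance, with a container such as Ninject (which the question mentions considering), configuring a "singleton" is a single registration line rather than a hand-written static instance. A sketch with made-up type names:

using Ninject;

public interface ISettings { string ConnectionString { get; } }
public class AppSettings : ISettings { public string ConnectionString => "Server=...;"; }

public static class CompositionRoot
{
    public static void Main()
    {
        var kernel = new StandardKernel();

        // One line of configuration instead of a hand-rolled static singleton.
        kernel.Bind<ISettings>().To<AppSettings>().InSingletonScope();

        var first = kernel.Get<ISettings>();
        var second = kernel.Get<ISettings>();
        // first and second are the same instance; a test could instead
        // bind ISettings to a fake implementation.
    }
}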
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener for me: my designs are a lot more object-oriented than they were before.
And with my current application I DON'T unit-test (bad, bad me), but I STILL couldn't live without DI anymore. It is so much easier to move things around and keep classes small and simple.
While I semi-agree with you on the DB example, one of the big things I found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or possibly demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
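As a rough sketch of that idea (ICustomerData and the fake implementation are invented names):

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string Name { get; set; } = "";
}

public interface ICustomerData
{
    IEnumerable<Customer> LoadCustomers();
}

// The production implementation would talk to the database; a demo or test
// implementation can read from XML or, as here, an in-memory list instead.
public class InMemoryCustomerData : ICustomerData
{
    public IEnumerable<Customer> LoadCustomers() =>
        new[] { new Customer { Name = "Sample" } };
}

// The domain logic only sees the abstraction, so it can be tested without a database.
public class CustomerReport
{
    private readonly ICustomerData _data;
    public CustomerReport(ICustomerData data) { _data = data; }

    public int CountCustomers() => _data.LoadCustomers().Count();
}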
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you're injecting into it. While it may not truly "process" them, it runs methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after the object was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like to do is have some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is to have some way of faking an instance of Bar passed to Foo such that we can control the properties set on this fake Bar and achieve what we set out to do, test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. return true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
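To illustrate, here is a sketch of such a test (it assumes Bar exposes a virtual SomeProperty and an accessible parameterless constructor, and uses NUnit-style attributes; if those assumptions don't hold, extracting an IBar interface or using a mocking library achieves the same thing):

using NUnit.Framework;

// A hand-rolled stub that overrides the property the test cares about.
public class StubBar : Bar
{
    private readonly PropertyEnum _value;
    public StubBar(PropertyEnum value) { _value = value; }
    public override PropertyEnum SomeProperty => _value;
}

[TestFixture]
public class FooTests
{
    [Test]
    public void IsPropertyOfBarValid_ReturnsTrue_ForValidProperty()
    {
        var foo = new Foo(new StubBar(PropertyEnum.ValidProperty));
        Assert.IsTrue(foo.IsPropertyOfBarValid());
    }
}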
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood this tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection when an object is instantiated in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file and you only need the XML file, you would have to instantiate the object and configure a lot of things yourself if you didn't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.