How often do you use IoC for controllers/DAL in real projects?
IoC lets you abstract the application from concrete implementations with an additional layer of interfaces that must be implemented. But how often does the concrete implementation actually change? Should we really do the job twice, adding a method to the interface and then to the implementation, if the implementation will hardly ever change? I have taken part in about 10 ASP.NET projects, and the DAL (ORM-based or not) was never rewritten completely.
Watching lots of videos, I clearly understand that IoC "is cool" and a really nice way to program, but is it really needed?
Added a bit later:
Yes, IoC makes it easier to set up a testing environment, but we also have a nice way to test the DAL without IoC: we wrap the DAL's database calls in uncommitted transactions, so there is no risk of leaving the data in an unstable state.
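For concreteness, a minimal sketch of that approach, assuming an NUnit-style test and a hypothetical ClientDal class:

using System.Transactions;

[Test]
public void Add_InsertsClient()
{
    // everything inside the scope hits the real database...
    using (var scope = new TransactionScope())
    {
        var dal = new ClientDal(connectionString);   // hypothetical DAL class
        dal.Add(new Client { Name = "test" });

        Assert.IsNotNull(dal.FindByName("test"));    // hypothetical query method

        // ...but scope.Complete() is never called, so everything rolls back
        // when the scope is disposed and the data stays stable
    }
}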
IoC isn't a pattern only for writing modular programs; it also allows for easier testing, by letting you swap in mock objects that implement the same interface as the components they stand in for.
Plus, it actually makes code much easier to maintain down the road.
It's not IoC that lets you abstract the application from the concrete implementation with an additional layer of interfaces; that is simply how you should design your application to make it more modular and reusable. Another important benefit is that once you've designed your application this way, it becomes much easier to test the different parts in isolation, without depending on concrete database access, for example.
There's much more to IoC than the ability to change implementations:
testing
explicit dependencies, not hidden inside a private DataContext
automatic instantiation: you declare in the constructor that you need something, and you get it, with all deeply nested dependencies resolved
separation of assemblies: take a look at S#arp Architecture to see how IoC lets you avoid referencing NHibernate and other specific assemblies that you would otherwise have to reference
management of lifetime: the ability to specify a per-request / singleton / transient lifetime for objects and change it in one place, instead of in dozens of controllers
the ability to do dynamic stuff, like getting the correct data context in model binders, because with IoC you now have metadata about your dependencies; this suggests that IoC does for your object dependencies what reflection does for C# programming: it opens up a lot of new possibilities you never even thought about
And so on; I'm sure I missed a lot of other positives. The only "bad" thing I can think of (and that you mentioned) is the duplication of interfaces, which is a non-issue with modern IDEs' support for refactoring.
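To make points 2-4 concrete, here is a minimal sketch assuming Castle Windsor (the container other posts here also use); the service names are illustrative:

var container = new WindsorContainer();

container.Register(
    Component.For<IDataContext>().ImplementedBy<SqlDataContext>()
        .LifeStyle.PerWebRequest,                  // lifetime chosen in one place
    Component.For<IClientRepository>().ImplementedBy<ClientRepository>(),
    Component.For<ClientService>().LifeStyle.Transient);

// ClientService declares IClientRepository in its constructor, which in turn
// declares IDataContext; Resolve builds the whole graph automatically.
var service = container.Resolve<ClientService>();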
Well, if your data interfaces change every day, and you have hundreds of them - you may want to avoid IoC.
But, do you avoid good design practices just because it's harder to follow them? Do you copy and paste code instead of extracting a method/class, just because it takes more time and more code to do so? Do you place business logic in views just because it's harder to create view models and sync them with domain models? If yes, then you can avoid IoC, no problem.
You're arguing that using IoC takes MORE code than not using it. I disagree.
Here is the entire DAL IOC configuration for one of my projects using LinqToSql. The ContextProvider class is simply a thread safe LinqToSql context factory.
container.Register(Component.For<IContextProvider<LSDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<LSDataContext>>());
container.Register(Component.For<IContextProvider<WorkSheetDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<WorkSheetDataContext>>());
container.Register(Component.For<IContextProvider<OffersReadContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<OffersReadContext>>());
Here is the entire DAL configuration for one of my projects using NHibernate and the repository pattern:
container.Register(Component.For<NHSessionBuilder>().LifeStyle.Singleton);
container.Register(Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHRepositoryBase<>)));
Here is how I consume the DAL in my BLL (w/ dependency injection):
public class ClientService
{
private readonly IRepository<Client> _Clients;
public ClientService(IRepository<Client> clients)
{
_Clients = clients;
}
public IEnumerable<Client> GetClientsWithGoodCredit()
{
return _Clients.Where(c => c.HasGoodCredit);
}
}
Note that my IRepository<> interface inherits IQueryable<> so this code is very trivial!
Here's how I can test my BLL without connecting to a DB:
public void GetClientsWithGoodCredit_ReturnsClientWithGoodCredit()
{
var clientWithGoodCredit = new Client() {HasGoodCredit = true};
var clientWithBadCredit = new Client() {HasGoodCredit = false};
var clients = new List<Client>() { clientWithGoodCredit, clientWithBadCredit }.ToTestRepository();
var service = new ClientService(clients);
var clientsWithGoodCredit = service.GetClientsWithGoodCredit();
Assert.True(clientsWithGoodCredit.Count() == 1);   // xUnit-style assertion; adjust to your test framework
Assert.True(clientsWithGoodCredit.First() == clientWithGoodCredit);
}
ToTestRepository() is an extension method that returns a fake IRepository<> that uses an in-memory list.
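In case it helps, here is roughly what ToTestRepository() can look like when IRepository<> adds nothing beyond IQueryable<> for querying (TestRepository is illustrative):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public static class TestRepositoryExtensions
{
    public static IRepository<T> ToTestRepository<T>(this IList<T> items)
    {
        return new TestRepository<T>(items);
    }
}

// wraps an in-memory list and delegates the IQueryable<T> members to LINQ to Objects
public class TestRepository<T> : IRepository<T>
{
    private readonly IQueryable<T> _items;

    public TestRepository(IList<T> items)
    {
        _items = items.AsQueryable();
    }

    public Type ElementType { get { return _items.ElementType; } }
    public Expression Expression { get { return _items.Expression; } }
    public IQueryProvider Provider { get { return _items.Provider; } }
    public IEnumerator<T> GetEnumerator() { return _items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}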
There is no possible way you can argue that this is more complicated than newing up your DAL all over your BLL.
The only other way you could have written the above test is by connecting to a DB, saving some test clients, and then querying. I guarantee that takes 100+ times longer to execute than this did. (Multiply that by 1,000 tests and you can go get some coffee while you wait.)
Also, by using uncommitted transactions for testing, you introduce debugging nightmares caused by ORMs that don't query over uncommitted entities.
Related
I have this legacy ASP.NET web app that has mostly been transitioned to MVC. Code testability is generally bad, as often happens with legacy apps: responsibilities are in a jumble, tight coupling prevails, that sort of thing.
I've been refactoring the app for a couple of weeks to improve testability, doing my best to abide by SOLID, and I've come to the point where I need to pass HttpContext or HttpResponse and the like to a method in an HttpModule. Now, obviously the HttpModules can grab a copy of the HttpApplication and just pass that everywhere (which is how it is at the moment), but I would rather not grant a dependency access to such a large root object.
Because some of our IHttpModules are initialized before MVC even comes into play, I've had problems using an IoC container to inject HttpContextBase or HttpResponseBase into the constructors of the various helpers that the modules use. I thus have to either pass the HttpContextBase to methods of said helpers, which is taxing to use and feels somewhat wrong, or create something like this:
public class HttpContextProvider : IHttpContextProvider {
// ... ctor
public HttpContextBase GetContext() { return HttpContext.Current; }
}
I would then inject an instance implementing IHttpContextProvider into a helper and call .GetContext() on it later as needed.
Intuitively, I feel like I might be doing something wrong, but cannot really get a handle on what it could be. This whole refactoring is complex enough - and the bus factor in the team is low enough - that I'm hesitant to request too much of the lead dev's time for this issue, which could almost be seen as academic from a practical standpoint.
I can suggest a couple of different options:
Try to register these objects in the IoC container; to fix the problem (the class is initialized before its dependencies can be resolved), just use Lazy<T>. Most IoC containers support it, so check whether yours does.
Introduce providers as in your example, but I'd suggest they return not HttpContext (or HttpResponse) but exactly what you need from those classes. It will be easier to mock that behaviour in tests (you may not even need to construct a context there at all), and it also hides all the complex parts of HttpContext that aren't used.
The second option will only work if you use these objects in a similar way each time, so that you can replace them with two or three methods on an ISmthProvider. If you always use the objects differently, it may be better to combine both approaches: extract the most popular usages into a service, and in the other places pass Lazy<HttpContextBase> to the constructor.
In my opinion it's better not to pass HttpContext/HttpResponse around directly, as that gives access to too much. Later on, people can make a mess of it (using it without thinking, since it's already there), and as a result it becomes difficult to test and to refactor.
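A minimal sketch of the second option (the interface and member names are just for illustration):

public interface IRequestInfoProvider
{
    string UserAgent { get; }
    void RedirectTo(string url);
}

public class HttpRequestInfoProvider : IRequestInfoProvider
{
    // HttpContext.Current is read at call time, not at construction time,
    // so this is safe even for helpers created before MVC is initialized
    private HttpContextBase Context
    {
        get { return new HttpContextWrapper(HttpContext.Current); }
    }

    public string UserAgent { get { return Context.Request.UserAgent; } }
    public void RedirectTo(string url) { Context.Response.Redirect(url); }
}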
I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and having read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new parts, and some select refactored old parts, of the code can have dependencies injected through the constructor; the rest of the system uses the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service locator call at some point into code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any external place). Each separate type of scheduled job is implemented as a class inheriting a common base class. When a job is started, it is fed information about which targets it should write its log messages to and which configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them, but if the job implementation grows beyond, say, 10-20 classes, that doesn't seem very handy.
Logging is the larger problem. The subsystems the job calls probably also need to write to the log, and usually, in examples, this is done by simply requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (meaning there are potentially multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase {
    // injected through the constructor
    private readonly ILoggerFactory _loggerFactory;
    private readonly JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);
        // if there were no legacy parts in the call chain, I would register log as an
        // instance in a child container and Resolve the next part of the call chain,
        // and everyone requesting ILog would get the correct logging targets

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic {
    public void DoStuff(JobConfig configurationFromDatabase, ILog log) {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass {
    public void DoOldStuff() {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass {
    public void DoMoreOldStuff() {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint {
    public void DoNewStuff() {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter, since many of them use (often multiple) constructors to inject values instead of dependencies, and they would have to be refactored one by one. The caller basically implicitly controls the lifetime of the object, and the implementations assume this (in the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
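For example, a minimal sketch of that stepping stone (ILog and NullLog stand in for whatever logging abstraction the code base uses):

public class LegacyReportBuilder
{
    // setter injection: legacy callers that still "new up" this class keep
    // working against the null logger, while the container or a test can
    // overwrite the property; the constructor is left untouched
    public ILog Log { get; set; }

    public LegacyReportBuilder()
    {
        Log = new NullLog();
    }

    public void Build()
    {
        Log.Info("building report");
        // ...existing legacy logic unchanged...
    }
}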
I understand the concept behind DI, but I'm just learning what different IoC containers can do. It seems that most people advocate using IoC containers to wire up stateless services, but what about using them for stateful objects like entities?
Whether it's right or wrong, I normally stuff my entities with behavior, even if that behavior requires an outside class. Example:
public class Order : IOrder
{
private string _ShipAddress;
private IShipQuoter _ShipQuoter;
public Order(IOrderData OrderData, IShipQuoter ShipQuoter)
{
// OrderData comes from a repository and has the data needed
// to construct order
_ShipAddress = OrderData.ShipAddress; // etc.
_ShipQuoter = ShipQuoter;
}
private decimal GetShippingRate()
{
return _ShipQuoter.GetRate(this);
}
}
As you can see, the dependencies are Constructor Injected. Now for a couple of questions.
Is it considered bad practice to have your entities depend on outside classes such as the ShipQuoter? Eliminating these dependencies seems to lead me towards an anemic domain, if I understand the definition correctly.
Is it bad practice to use an IoC container to resolve these dependencies and construct an entity when needed? Is it possible to do this?
Thanks for any insight.
The first question is the most difficult to answer. Is it bad practice to have Entities depend on outside classes? It's certainly not the most common thing to do.
If, for example, you inject a Repository into your Entities you effectively have an implementation of the Active Record pattern. Some people like this pattern for the convenience it provides, while others (like me) consider it a code smell or anti-pattern because it violates the Single Responsibility Principle (SRP).
You could argue that injecting other dependencies into Entities would pull you in the same direction (away from SRP). On the other hand you are certainly correct that if you don't do this, the pull is towards an Anemic Domain Model.
I struggled with all of this for a long time until I came across Greg Young's (abandoned) paper on DDDD, where he explains why the stereotypical n-tier/n-layer architecture will always be CRUDy (and thus rather anemic).
Moving our focus to modeling Domain objects as Commands and Events instead of Nouns seems to enable us to build a proper object-oriented domain model.
The second question is easier to answer. You can always use an Abstract Factory to create instances at run-time. With Castle Windsor you can even use the Typed Factory Facility, relieving you of the burden of implementing the factories manually.
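A rough sketch of how that can look with Windsor's Typed Factory Facility, reusing the Order example from the question (IOrderFactory is illustrative):

public interface IOrderFactory
{
    Order Create(IOrderData orderData);
}

// Registration: the facility generates the IOrderFactory implementation at
// runtime, forwarding Create(...) to the container, which supplies IShipQuoter
// itself and passes orderData through to Order's constructor.
container.AddFacility<TypedFactoryFacility>();
container.Register(
    Component.For<Order>().LifeStyle.Transient,
    Component.For<IOrderFactory>().AsFactory());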
I know this is an old post, but I wanted to add something. The domain entity should not persist itself, even if you pass an abstracted repository into the ctor. The reason I suggest this is not merely that it violates SRP; it is also contrary to DDD's notion of aggregation. Let me explain: DDD is suited to complex apps with inherently deep graphs, and therefore we use aggregate or composite roots to persist changes to the underlying "children". When we inject persistence into the individual children, we violate the relationship the children have to the composite or aggregate root that should be "in charge" of their life cycle. Of course, the composite root or aggregate does not persist its own graph either. Another problem with injecting dependencies into DDD objects is that an injected domain object effectively has no state until some other event takes place to hydrate its state. Any consumer of the code is then forced to init or set up the domain object before they can invoke business behavior, which violates encapsulation.
I am a bit curious what experience other developers have had applying the Repository pattern in ASP.NET MVC with Entity Framework or NHibernate. It seems to me that this pattern is already implemented by the ORMs themselves: by DbContext and DbSet<T> in Entity Framework, and by ISession in NHibernate. Most of the concerns addressed by the Repository pattern, as catalogued in PoEAA and DDD, are pretty adequately implemented by these ORMs. Namely, those concerns are:
Persistence
OO View of the data
Data Access Logic Abstraction
Query Access Logic
In addition, most of the implementations of the Repository pattern that I have seen follow this implementation pattern, assuming we are developing a blog application.
NHibernate implementation:
public class PostRepository : IPostRepository
{
private ISession _session;
public PostRepository(ISession session)
{
_session = session;
}
public void Add(Post post)
{
_session.Save(post);
}
// other crud methods.
}
Entity Framework:
public class PostRepository : IPostRepository
{
private DbContext _session;
public PostRepository(DbContext session)
{
_session = session;
}
public void Add(Post post)
{
_session.Posts.Add(post);
_session.SaveChanges();
}
// other crud methods.
}
It seems to me that when we are using ORMs such as NHibernate or Entity Framework, creating these repository implementations is redundant. Furthermore, since these pattern implementations do no more than what is already there in the ORMs, they act more as noise than as helpful OO abstractions. It seems that using the Repository pattern in the situation described above is nothing more than developer self-aggrandizement, and more pomp and ceremony without any realizable technical benefits. What are your thoughts?
The answer is no, if you do not need to be able to switch ORMs or to test any class that has a dependency on your ORM/database.
If you want to be able to switch ORMs, or to easily test the classes which use the database layer: yes, you need a repository (with an interface specification).
With the Repository pattern you can also switch to an in-memory repository (which I do in my unit tests), an XML file, or whatever.
Update
The problem with most repository pattern implementations that you can find by Googling is that they don't work very well in production. They lack options for limiting the result (paging) and ordering the result, which is kind of amazing.
The Repository pattern comes into its glory when it's combined with a UnitOfWork implementation and has support for the Specification pattern.
If you find one having all of that, let me know :) (I do have my own, except for a well-working specification part.)
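For reference, the kind of contract I mean looks roughly like this (a sketch only; the names are illustrative):

public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
}

public interface IRepository<T>
{
    // paging and ordering are first-class, not an afterthought
    IList<T> Find(ISpecification<T> specification,
                  int pageNumber, int pageSize,
                  Expression<Func<T, object>> orderBy);
    void Add(T entity);
    void Remove(T entity);
}

public interface IUnitOfWork : IDisposable
{
    void SaveChanges();   // commits everything done through the repositories
}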
Update 2
Repository is so much more than just accessing the database in an abstracted way, which an ORM can already do. A proper Repository implementation should handle all the entities in an aggregate (for instance, Order and OrderLine). By handling them in the same repository class, you can always make sure they are built correctly.
But hey, you say: that's done automatically for me by the ORM. Well, yes and no. If you create a website, you most likely want to edit only one order line. Do you fetch the complete order, loop through it to find the order line, and then add it to the view?
By doing so, you introduce logic into your controller that does not belong there. And how do you handle it when a web service wants the same thing? Duplicate your code?
By using an ORM, it's quite easy to fetch any entity from anywhere, myOrm.Fetch<User>(user => user.Id == 1), modify it, and then save it. That can be quite handy, but it also adds code smells, since you duplicate code and have no control over how the objects are created, whether they have a valid state, or whether they have correct associations.
The next thing that comes to mind is that you might want to be able to subscribe to events like Created, Updated, and Deleted in a centralized way. That's easy if you have a repository.
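For instance, a small sketch based on the PostRepository from the question:

public class PostRepository : IPostRepository
{
    private readonly ISession _session;

    public event Action<Post> Created;

    public PostRepository(ISession session)
    {
        _session = session;
    }

    public void Add(Post post)
    {
        _session.Save(post);

        // every creation funnels through this one method, so a centralized
        // subscription point is trivial
        var handler = Created;
        if (handler != null) handler(post);
    }
}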
For me an ORM provides a way to map classes to tables and nothing more. I still like to wrap them in repositories to have control over them and get a single point of modification.
I think it makes sense only if you want to decrease the level of dependency. In the abstract, you can have IPostRepository in your infrastructure package and several independent implementations of this interface built on top of EF, NH, or something else. This is useful for TDD.
In practice, the NH session (and the EF context) implements something like the "Unit of Work" pattern. Furthermore, with NH and the Repository pattern you can run into a lot of bugs and architectural issues.
For example, an NH entity can be saved bypassing your Repository implementation. You can get it from the session (Repository.Load), change one of its properties, and call session.Flush (at the end of the request, for example, since the Repository pattern doesn't assume responsibility for flushing), and your changes will be successfully written to the db.
You've only mentioned basic CRUD actions. Doing these directly does mean you have to be aware of transactions, flushing and other things that a repository can wrap up, but I guess the value of repositories becomes more apparent when you think about complex retrieval queries.
Imagine then that you do decide to use the NHibernate session directly in your application layer.
You will need to do the equivalent of WHERE clauses and ORDER BYs etc, using either HQL or NHibernate criteria. This means your code has to reference NHibernate, and contains ideas specific to NHibernate. This makes your application hard to test and harder for others unfamiliar with NH to follow. A call to repository.GetCompletedOrders is much more descriptive and reusable than one that includes something like "where IsComplete = true and IsDeleted = false..." etc.
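In code, the contrast looks something like this (a sketch using NHibernate's QueryOver API; the entity and flag names are illustrative):

public class OrderRepository : IOrderRepository
{
    private readonly ISession _session;

    public OrderRepository(ISession session)
    {
        _session = session;
    }

    // the NHibernate-specific query logic lives here, once, behind a
    // descriptive name; callers never see the IsComplete/IsDeleted details
    public IList<Order> GetCompletedOrders()
    {
        return _session.QueryOver<Order>()
                       .Where(o => o.IsComplete && !o.IsDeleted)
                       .List();
    }
}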
You could use Linq to NHibernate instead, but now you have the situation where you can easily forget that you're working on an IQueryable. You could end up chaining Linq expressions which generate enormous queries when they execute, without realising it (I speak from experience)! Mike Hadlow sparked a conversation on essentially this topic in his post Should my repository expose IQueryable.
N.b. If you don't like having lots of methods on custom repositories for different queries (like GetCompletedOrders), you can use specification parameters (like Get(specification)), which allow you to specify filters, orderings etc. without using data access language.
Going back to the list of benefits of repository that you gave:
Persistence
OO View of the data
Data Access Logic Abstraction
Query Access Logic
You can see that points 3 and 4 are not provided for by using the persistence framework classes directly, especially in real world retrieval scenarios.
Something that has been bugging me since I read an answer on another stackoverflow question (the precise one eludes me now) where a user stated something like "If you're calling the Service Locator, you're doing it wrong."
It was someone with a high reputation (in the hundred thousands, I think) so I tend to think this person might know what they're talking about. I've been using DI for my projects since I first started learning about it and how well it relates to Unit Testing and what not. It's something I'm fairly comfortable with now and I think I know what I'm doing.
However, there are a lot of places where I've been using the Service Locator to resolve dependencies in my project. One prime example comes from my ModelBinder implementations.
Example of a typical model binder.
public class FileModelBinder : IModelBinder {
public object BindModel(ControllerContext controllerContext,
ModelBindingContext bindingContext) {
ValueProviderResult value = bindingContext.ValueProvider.GetValue("id");
IDataContext db = Services.Current.GetService<IDataContext>();
return db.Files.SingleOrDefault(i => i.Id == value.AttemptedValue);
}
}
not a real implementation - just a quick example
Since the ModelBinder implementation requires a new instance when a Binder is first requested, it's impossible to use Dependency Injection on the constructor for this particular implementation.
It's this way in a lot of my classes. Another example is that of a Cache Expiration process that runs a method whenever a cache object expires in my website. I run a bunch of database calls and what not. There too I'm using a Service Locator to get the required dependency.
Another issue I had recently (that I posted a question on here about) was that all my controllers required an instance of IDataContext which I used DI for - but one action method required a different instance of IDataContext. Luckily Ninject came to the rescue with a named dependency. However, this felt like a kludge and not a real solution.
I thought I, at least, understood the concept of Separation of Concerns reasonably well but there seems to be something fundamentally wrong with how I understand Dependency Injection and the Service Locator Pattern - and I don't know what that is.
The way I currently understand it - and this could be wrong as well - is that, at least in MVC, the ControllerFactory looks for a Constructor for a Controller and calls the Service Locator itself to get the required dependencies and then passes them in. However, I can understand that not all classes and what not have a Factory to create them. So it seems to me that some Service Locator pattern is acceptable...but...
When is it not acceptable?
What sort of pattern should I be on the look out for when I should rethink how I'm using the Service Locator Pattern?
Is my ModelBinder implementation wrong? If so, what do I need to learn to fix it?
In another question along the lines of this one user Mark Seemann recommended an Abstract Factory - How does this relate?
I guess that's it - I can't really think of any other question to help my understanding but any extra information is greatly appreciated.
I understand that DI might not be the answer to everything and I might be going overboard in how I implement it, however, it seems to work the way I expect it to with Unit Testing and what not.
I'm not looking for code to fix my example implementation - I'm looking to learn, looking for an explanation to fix my flawed understanding.
I wish stackoverflow.com had the ability to save draft questions. I also hope whoever answers this question gets the appropriate amount of reputation for answering this question as I think I'm asking for a lot. Thanks, in advance.
Consider the following:
public class MyClass
{
IMyInterface _myInterface;
IMyOtherInterface _myOtherInterface;
public MyClass(IMyInterface myInterface, IMyOtherInterface myOtherInterface)
{
// Foo
_myInterface = myInterface;
_myOtherInterface = myOtherInterface;
}
}
With this design I am able to express the dependency requirements for my type. The type itself isn't responsible for knowing how to instantiate any of the dependencies, they are given to it (injected) by whatever resolving mechanism is used [typically an IoC container]. Whereas:
public class MyClass
{
IMyInterface _myInterface;
IMyOtherInterface _myOtherInterface;
public MyClass()
{
// Bar
_myInterface = ServiceLocator.Resolve<IMyInterface>();
_myOtherInterface = ServiceLocator.Resolve<IMyOtherInterface>();
}
}
Our class is now dependent on creating the specific instances, albeit via delegation to a service locator. In this sense, Service Location can be considered an anti-pattern, because you're not exposing dependencies, and you're allowing problems that could be caught at compile time to bubble up into runtime. (A good read is here.) You're hiding complexities.
The choice between one or the other really depends on what you're building on top of and the services it provides. Typically, if you're building an application from scratch, I would choose DI every time. It improves maintainability, promotes modularity, and makes testing types a whole lot easier. But, taking ASP.NET MVC3 as an example, you could easily implement SL, as it's baked into the design.
You can always go for a composite design where you use IoC/DI alongside SL, much like using the Common Service Locator. Your component parts could be wired up through DI but exposed through SL. You could even throw composition into the mix and use something like the Managed Extensibility Framework (which itself supports DI, but can also be wired to other IoC containers or service locators). It's a big design choice to make; generally my recommendation would be IoC/DI where possible.
Your specific design I wouldn't say is wrong. In this instance, your code is not responsible for creating an instance of the model binder itself; that's up to the framework, so you have no control over that. But your use of the service locator could probably easily be changed to access an IoC container. Then again, the act of calling Resolve on the IoC container... would you not consider that service location?
With the abstract factory pattern, the factory is specialised at creating specific types. You don't register types for resolution; you essentially register an abstract factory, and that builds any types you may require. A Service Locator, by contrast, is designed to locate services and return those instances. Similar from a convention point of view, but very different in behaviour.
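Applied to the asker's ModelBinder, the difference might look like this (IDataContextFactory is illustrative, not a framework type):

// abstract factory: a narrow contract that creates exactly one kind of thing
public interface IDataContextFactory
{
    IDataContext Create();
}

public class FileModelBinder : IModelBinder
{
    private readonly IDataContextFactory _contextFactory;

    // the dependency is explicit: anyone constructing the binder can see it,
    // and the compiler enforces it, unlike Services.Current.GetService<T>()
    public FileModelBinder(IDataContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        ValueProviderResult value = bindingContext.ValueProvider.GetValue("id");
        IDataContext db = _contextFactory.Create();
        return db.Files.SingleOrDefault(f => f.Id == value.AttemptedValue);
    }
}

// registered once at startup, where the composition root controls construction:
// ModelBinders.Binders.Add(typeof(File), new FileModelBinder(factory));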