I'm still gaining experience with dependency injection. I created a new console app project and added two other projects to mimic a real-world app. So the three projects in my solution are:
MyConsoleApp
MyBusinessService
MyDataRepository
I created all my interfaces so that MyBusinessService is only using interfaces to get data from the repository.
My question is about MyConsoleApp. As I understand it, this is where Ninject will resolve all the dependencies.
Two questions:
I think this means MyConsoleApp will have to reference both MyBusinessService and MyDataRepository. Is this correct?
I think, in MyConsoleApp, I'll have to "manually" bind the IMyDataRepository interface to MyDataRepository concrete class -- see code below. Is this correct? I get a bit confused here because in some tutorials, they're mentioning that Ninject will resolve dependencies "automatically".
I think my code will look like this:
static void Main()
{
    // Get Ninject going (requires "using Ninject;" at the top of the file)
    var kernel = new StandardKernel();

    // Bindings -- note the capital B: the method is Bind, not bind
    kernel.Bind<IMyDataRepository>().To<MyDataRepository>();

    // Some business logic code my console app will process
}
Disclaimer: I am by no means a Ninject expert (the projects I've been on have tended to use StructureMap).
To answer question 1:
It depends. If your app code has direct dependencies on the data repository, then yes. However, many consider it better practice to have the business layer wrap the data-access layer and to have the main application code depend only on the business layer (this way your application is persistence-ignorant). Your data-access layer would perform only I/O; any validation would be done in the business-layer wrapper.
To answer question 2:
Any code that I've seen using any DI/IoC container has calls to Bind() or an equivalent method. IoC containers do resolve dependencies "automatically" in the sense that if you inject an IFoo into a class, the container will automatically construct an appropriate instance of a class that implements IFoo. But in order for the container to do that, you have to instruct the container which class to use.
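To make that concrete, here is a minimal Ninject sketch. The class names echo the question's project names but are illustrative, not taken from the original post. The only explicit instruction is the Bind call; Ninject then builds the rest of the object graph by inspecting constructors:

using Ninject;

public interface IMyDataRepository { string GetData(); }

public class MyDataRepository : IMyDataRepository
{
    public string GetData() { return "data from the repository"; }
}

// The business class declares its dependency in the constructor and never
// references the concrete repository.
public class MyBusinessService
{
    private readonly IMyDataRepository repository;

    public MyBusinessService(IMyDataRepository repository)
    {
        this.repository = repository;
    }

    public string Process() { return repository.GetData().ToUpper(); }
}

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel();

        // The one thing you must tell Ninject: which concrete class
        // implements the interface.
        kernel.Bind<IMyDataRepository>().To<MyDataRepository>();

        // "Automatic" resolution: Ninject inspects the constructor of
        // MyBusinessService and injects MyDataRepository for you.
        // (Concrete classes like MyBusinessService are self-bound by default.)
        var service = kernel.Get<MyBusinessService>();
        System.Console.WriteLine(service.Process());
    }
}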
Consider the scenario where you've got a web app and occasionally connected mobile app that perform (mostly) the same function, which is managing widgets. The web app gets its data from web services. The mobile app uses a SQLite database.
You'd likely have an IWidgetRepository interface to represent data-access operations. But you'd have two implementations, one to interact with the web service, the other to interact with the SQLite DB. And both these implementations would (most likely) reside in your solution or a shared package.
In the web app, you'd bind IWidgetRepository to the web-service implementation; in the mobile app you'd bind it to the SQLite implementation. Since there are multiple implementations of the interface in question -- or, more generally, because we can't assume there will only ever be one implementation -- the container needs to be instructed as to which class/implementation to use.
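As a sketch of that scenario (all type names here are hypothetical), each application would have its own composition root with a different binding for the same interface:

using Ninject;

public interface IWidgetRepository { /* data-access operations */ }

public class WebServiceWidgetRepository : IWidgetRepository { /* calls the web service */ }

public class SqliteWidgetRepository : IWidgetRepository { /* talks to the SQLite DB */ }

// Web app composition root.
public static class WebAppBootstrapper
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IWidgetRepository>().To<WebServiceWidgetRepository>();
        return kernel;
    }
}

// Mobile app composition root: same interface, different implementation.
public static class MobileAppBootstrapper
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IWidgetRepository>().To<SqliteWidgetRepository>();
        return kernel;
    }
}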
Incidentally, most applications place this registration code into a separate module, frequently referred to as a "bootstrapper" (Ninject actually has a class by that name), and call the bootstrapper from your startup code.
Related
I have used Unity for my last project and was generally pleased. But benchmarks have me thinking I may go with Simple Injector for my next project.
However, Simple Injector does not seem to have an interface for its Container class. This means that anytime I want to use the container in a method, I cannot mock the container for unit testing.
I am confused that a tool that really functions based on interfaces would not itself provide an interface for the container. I know that the classic methods of dependency injection don't need the container anywhere other than at startup. (The rest uses constructor injection.) But I have found that when the rubber hits the road, that cannot always be true. Sometimes you just need the container in order to do a "resolve" in the code.
If I go with Simple Injector, then that code seems to get harder to unit test.
Am I right? Or am I missing something?
Simple Injector does not contain an IContainer abstraction, because:
It would be useless for Simple Injector to define such an interface, because depending on an IContainer instead of the Container class would still couple your code to Simple Injector, and that vendor lock-in is exactly what Simple Injector tries to prevent.
Any code you write, apart from the application's Composition Root, should not depend on the container, nor on an abstraction over the container. Both are implementations of the Service Locator anti-pattern.
You should NOT use a DI library when unit testing. When unit testing, you should manually inject all fake or mock objects into the class under test. Using a container only complicates things. Perhaps you are using a container because manually creating those classes is too cumbersome; this might indicate problems with your code (you might be violating the Single Responsibility Principle) or your tests (you might be missing a factory method to create the class under test).
You might use the container for your integration tests, but you shouldn't have that many integration tests in the first place. The focus should be on unit tests, and this should be easy when applying the dependency injection pattern. On top of that, there are better ways of hiding the container from your integration tests than depending on a very wide library-defined interface.
It is trivial to define such an interface (plus an adapter) yourself, which justifies not having it in the library. It is your job as application developer to define the right abstractions for your application, as stated by the Dependency Inversion Principle. Libraries and frameworks that try to do this for you fail most of the time to provide an abstraction that works for everyone.
The library itself does not use that abstraction, and according to the Framework Design Guidelines a library should in that case not define such an abstraction for you. As stated in the previous point, Simple Injector would get the abstraction wrong anyway.
Last but not least, the Simple Injector container does actually implement System.IServiceProvider which is defined in mscorlib.dll and can be used for retrieving service objects.
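If you do want such an abstraction, the point about it being trivial can be shown in a few lines. This is a hedged sketch: IServiceFactory and the adapter class are made-up names for illustration, kept deliberately narrow, and the adapter would live in your Composition Root:

using SimpleInjector;

// An application-owned abstraction, only as wide as this app needs --
// which is exactly why the application, not the library, should define it.
public interface IServiceFactory
{
    TService GetService<TService>() where TService : class;
}

// Adapter over the Simple Injector container; it lives in the Composition
// Root, so the rest of the code never sees SimpleInjector types.
public sealed class SimpleInjectorServiceFactory : IServiceFactory
{
    private readonly Container container;

    public SimpleInjectorServiceFactory(Container container)
    {
        this.container = container;
    }

    public TService GetService<TService>() where TService : class
    {
        return this.container.GetInstance<TService>();
    }
}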
I think the answer given here is entirely founded upon accepting that Service Locator is an anti-pattern, which I don't believe is globally accepted as true. See Windows Workflow Foundation's Extensions support.
The anti-pattern link (and its two updates) may also be weak... the latest update claims a violation of encapsulation ("relieving you of the burden of having to understand every implementation detail of every piece of code in your code base.") while at the same time claiming that up-front knowledge of dependencies is somehow different, for the purposes of that claim, from discovering them via unit tests. Either way, you're going to need to know what to give it.
All in all, if you want to follow the Locator pattern, either leverage its IServiceProvider, or simplify your container population (to a singleton) and create a static wrapper for it.
Suppose I have a BaseForm which depends on an ILogger or IResourceManager or something like that. Currently it resolves the correct implementation of the required service using the service locator, which I know is an anti-pattern.
Is using the constructor injection the right way to resolve this kind of dependency?
Do I have to register my BaseForm (and its derived types) in the container in order to create instances of them with resolved dependencies? Doesn't that complicate everything?
Is it bad to use a static factory wrapped around a service locator?
Unit-testing aside, will I really be punished because of using service locator anti-pattern?
Sorry about asking many questions at once. I've read the following SO questions and many others but reading them only added to my confusion:
How to use Dependency Injection and not Service Locator
What's the difference between the Dependency Injection and Service Locator patterns?
How to avoid Service Locator Anti-Pattern?
If possible, you should always go with dependency injection, since it has a few clear strengths. With UI technologies, however, it is not always possible to use dependency injection, since some UI technologies (in the .NET space, Win Forms and Web Forms, for instance) only allow your UI classes (forms, pages, controls, etc.) to have a default constructor. In that case you will have to fall back to something else, which is service locator.
In that case I can give you the following advice:
Only fall back to Service Locator for UI classes that can't be created by your container using dependency injection, and for stuff that you aren't unit testing anyway.
Try to implement as little logic as possible in those UI classes (treat them as Humble Objects containing only view-related code). This allows you to unit test as much as possible.
Wrap the container in a static method to hide it from the rest of the application. Make sure that a call to this static method fails when the dependency cannot be resolved; see the sketch below.
Resolve all dependencies in the (default) constructor of that type. This allows the application to fail fast if one of its dependencies cannot be resolved at the moment the type is created, instead of later on when some button is clicked.
Check during app start-up (or using a unit test), if all those UI types can be created. This saves you from having to go through the whole application (by opening all forms) to see if there is an error in the DI configuration.
When types cannot be built by the container, there is no reason to register them in the container. If they can be created by the container (such as with ASP.NET MVC Controller classes), it can be useful to register them explicitly, because some containers allow you to verify the configuration up front, which will detect configuration errors in those types right away.
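A minimal sketch of that static wrapper follows. It is container-agnostic (the resolver delegate, ILogger and CustomerForm are illustrative names, not from the original answer) and fails fast in the form's default constructor:

using System;

public static class ServiceLocator
{
    private static Func<Type, object> resolve;

    // Call once at application start-up, e.g.:
    //   ServiceLocator.SetResolver(t => kernel.Get(t));            // Ninject
    //   ServiceLocator.SetResolver(t => container.GetInstance(t)); // Simple Injector
    public static void SetResolver(Func<Type, object> resolver)
    {
        resolve = resolver;
    }

    public static T Resolve<T>()
    {
        if (resolve == null)
            throw new InvalidOperationException("ServiceLocator is not initialized.");

        // Containers throw when a registration is missing, which gives the
        // fail-fast behavior described above.
        return (T)resolve(typeof(T));
    }
}

public interface ILogger { void Log(string message); }

// A Win Forms-style class with the default constructor the designer demands.
public class CustomerForm // : System.Windows.Forms.Form
{
    private readonly ILogger logger;

    public CustomerForm()
    {
        // Resolve everything here, so a misconfiguration surfaces when the
        // form is created, not when a button is clicked.
        this.logger = ServiceLocator.Resolve<ILogger>();
    }
}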
Besides unit testing, there are two other important arguments against the use of the Service Locator, which are given by Mark Seemann in his famous blog post Service Locator is an Anti-Pattern:
Service Locator "hides a class’ dependencies, causing run-time errors instead of compile-time errors"
Service Locator is "making the code more difficult to maintain"
Quick question so I start in the right direction. I have a multi-project solution with an MVC presentation layer. Currently this layer only knows about an IServices class library. Now, if I want to use an IoC container, it seems like I will have to start adding references to all of the other projects in my solution to the MVC application so that I can configure the IoC.
Is this right or should each layer have its own IoC?
Thanks,
James
Someone must load the .dll files, and someone must be able to wire up the IoC.
Typically, the dll loading happens automatically and the IoC wiring happens in a nearly-hardcoded fashion.
You could load libraries dynamically: you can write code that tries to load each DLL in a given folder and invoke some kind of GetLibraryDescriptor method. That method tells you that the library provides an implementation for, say, ISomeInterface. Now you can ask the DLL to instantiate an object of the class it provides. You'd probably have to bridge this instantiation to the IoC container. I believe that such a design is better suited to a service locator.
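A hypothetical sketch of that discovery step, simplified to plain interface scanning instead of a GetLibraryDescriptor call (PluginLoader is a made-up name; the reflection APIs are standard .NET):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface ISomeInterface { }

public static class PluginLoader
{
    public static ISomeInterface LoadImplementation(string pluginFolder)
    {
        foreach (var file in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);

            // Find a concrete type implementing the expected interface.
            var implementation = assembly.GetTypes()
                .FirstOrDefault(t => typeof(ISomeInterface).IsAssignableFrom(t)
                                     && !t.IsAbstract && !t.IsInterface);

            if (implementation != null)
            {
                // Hand the instance (or the type) over to your IoC container here.
                return (ISomeInterface)Activator.CreateInstance(implementation);
            }
        }

        throw new InvalidOperationException("No implementation found.");
    }
}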
All this makes sense for shrink-wrap software, but I don't see many benefits for web software.
The only reason I see not to reference other libraries is to make sure nobody directly uses or accesses the code declared in them -- a tedious task and a sometimes impossible goal that moves encapsulation to the wrong level. If your classes are well-encapsulated, it shouldn't be necessary.
I have looked at simpler applications like NerdDinner and ContactManager, as well as more complicated ones like Kigg. I understand the simpler ones, and now I would like to understand the more complex ones.
Usually the simpler applications have repository classes and interfaces (as loosely coupled as they can get) on top of either LINQtoSQL or the Entity Framework. The repositories are called from the controllers to do the necessary data operations.
One common pattern I see when I examine more complicated applications like Kigg or Oxite is the introduction of (I am only scratching the surface here but I have to start somewhere):
IoC/DI (in Kigg's case, Unity)
Web Request Lifetime manager
Unit of Work
Here are my questions:
I understand that in order to truly have a loosely coupled application you have to use something like Unity. But it also seems like the moment you introduce Unity into the mix, you also have to introduce a Web Request Lifetime Manager. Why is that? Why do sample applications like NerdDinner not have a Web Request Lifetime Manager? What exactly does it do? Is it a Unity-specific thing?
A second pattern I notice is the introduction of Unit of Work. Again, same question: why do NerdDinner and ContactManager not use Unit of Work? Instead, these applications use the repository classes on top of Linq2Sql or Entity Framework to do the data manipulation, with no sign of any Unit of Work. What exactly is it and why should it be used?
Thanks
Below is an example of DI in NerdDinner at the DinnersController level:
// the repository field both constructors end up assigning
private IDinnerRepository dinnerRepository;

public DinnersController()
    : this(new DinnerRepository()) {
}

public DinnersController(IDinnerRepository repository) {
    dinnerRepository = repository;
}
So am I right to assume that, because of the first constructor, the controller "owns" the DinnerRepository, and the repository's lifetime will therefore be tied to the lifetime of the controller, since it is created there?
When LINQ to SQL is used directly, your controller owns the reference to the data context. It's usually a private reference inside the controller, and so it is created as part of the controller's construction. There's no need for lifetime management, since it all lives in one place.
However, when you use an IoC container, your data repository is created outside your controller. Since the IoC container that creates it for you doesn't know how, or for how long, you're going to use the created object, a lifetime strategy is introduced.
For example, a data context (repository) is usually created at the beginning of the web request and destroyed at the end. However, for components that work with an external web service, or some static mapper (e.g. a logger), there's no need to create them each time. So you may want to tell the container to create them once (i.e. a singleton lifestyle).
All this happens because IoC containers (like Unity) are designed to handle many situations, and they don't know your specific needs. For example, some applications use "conversation" transactions, where an NHibernate (or maybe Entity Framework) session may last across several pages / web requests. IoC containers allow you to tweak object lifetimes to suit your needs. But as said, this comes at a price: since there's no single predefined strategy, you have to select one yourself.
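A hedged sketch of what that looks like in classic Unity (the interface and class names are illustrative; recent Unity versions live in the Unity namespace rather than Microsoft.Practices.Unity):

using Microsoft.Practices.Unity;

public interface IDinnerRepository { }
public class DinnerRepository : IDinnerRepository { }

public interface ILogger { }
public class ConsoleLogger : ILogger { }

public static class ContainerBootstrapper
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // Transient: a new repository for every resolve (Unity's default).
        container.RegisterType<IDinnerRepository, DinnerRepository>(
            new TransientLifetimeManager());

        // Singleton: one logger instance shared for the container's lifetime.
        container.RegisterType<ILogger, ConsoleLogger>(
            new ContainerControlledLifetimeManager());

        // A per-web-request manager (from Unity's ASP.NET integration) would
        // sit between these two: one instance per HTTP request.
        return container;
    }
}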
Why NerdDinner and other sample applications do not use more advanced techniques is simply because they are intended to demonstrate MVC features, not advanced usage of other libraries. I remember an article written to demonstrate one IoC container's advanced functionality - the article broke some approved design patterns, like separation of concerns - but that wasn't important, because design patterns were not the goal of the article. The same goes for simple MVC demonstration applications - they do not want you, the MVC newcomer, to be lost in IoC labyrinths.
And I would not recommend looking at Oxite as a design reference example:
http://codebetter.com/blogs/karlseguin/archive/2008/12/15/oxite-oh-dear-lord-why.aspx
http://ayende.com/Blog/archive/2008/12/19/oxite-open-exchangable-informative-troubled-engine.aspx
Most if not all of the DI containers touch the concept of lifetimes, I believe. Depending on the scenario involved, you may want the container to always return the same instance of a registered component, while for another component, you may want it to always return a new instance. Most containers also allow you to specify that within a particular context you want it to return the same instance, etc.
I don't know Unity very well (so far I have used Windsor and Autofac), but I suspect the web request lifetime manager to be an implementation of lifetime strategies where the same instance is provided by the container during the lifetime of a single web request. You will find similar strategies in containers like Windsor.
Finally, I suppose you are referring to Unit of Work. A Unit of Work is in essence a group of actions that you want to succeed or fail as one atomic business transaction. For a more formal description, look at Martin Fowler's definition. It is a concept that has gained more popularity in the context of Domain-Driven Design. A unit of work keeps track of the changes you apply in such a transaction, and when the time is right, it commits these changes in one ACID transaction. In NHibernate, for example, the session supports the notion of unit of work and, more specifically, the change tracking, while in Linq2SQL it is the DataContext ...
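As a minimal sketch of the idea (IUnitOfWork and the wrapper class are illustrative names, not from the original answer), a LINQ to SQL DataContext already does the change tracking, so a unit of work can be a thin adapter over it:

using System;
using System.Data.Linq;

// The abstraction: a group of changes that succeed or fail together.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class LinqToSqlUnitOfWork : IUnitOfWork
{
    private readonly DataContext context;

    public LinqToSqlUnitOfWork(DataContext context)
    {
        this.context = context;
    }

    public void Commit()
    {
        // All tracked inserts/updates/deletes are submitted as one unit.
        context.SubmitChanges();
    }

    public void Dispose()
    {
        context.Dispose();
    }
}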
Does anyone have advice or tips on using a web service as the model in an ASP.Net MVC application? I haven't seen anyone writing about doing this. I'd like to build an MVC app, but not tie it to using a specific database, nor limit the database to the single MVC app. I feel a web service (RESTful, most likely ADO.Net Data Services) is the way to go.
How likely, or useful, is it for your MVC app to be decoupled from your database? How often have you seen, in your application lifetime, a change from SQL Server to Oracle? From the last 10 years of projects I've delivered, it's never happened.
Architectures are like onions: they have layers of abstraction above the things they depend on. And if you're going to use an RDBMS for storage, that's at the core of your architecture. Abstracting yourself from the DB so you can swap it around is very much a fallacy.
Now you can decouple your database access from your domain, and the repository pattern is one of the ways to do that. Most mature solutions use an ORM these days, so you may want to have a look at NHibernate if you want a mature technology, or ActiveRecord / linq2sql for a simpler active record pattern on top of your data.
Now that you have your data strategy in place, you have a domain of some sort. When you expose data to your client, you can choose to do so through an MVC pattern, where you'll usually send DTOs generated from your domain for rendering, or you can decide to leverage an architecture style like REST to provide more loosely coupled systems, by providing links and custom representations.
You go from tight coupling to looser coupling as you go towards the external layers of your solution.
If your question however was to build an MVC app on top of a REST architecture or web services, and use that as a model... Why bother? If you're going to have a domain model, why not reuse it in your system and your services where it makes sense?
Generating a UI from an MVC app and generating documents needed for a RESTful architecture are two completely different contexts; basing one on top of the other is just going to cause much more pain than needed. And you're sacrificing performance.
It depends on your exact scenario, but a remote XML-based service as the model in MVC is, from experience, not a good idea; it's probably over-engineering and disregards the need for a domain to start with.
Edit 2010-11-27; clarified my thoughts, which was really needed.
A web service most often exposes functionality across different types of applications, not as an abstraction within one single application. You are probably thinking more of a way of encapsulating commands and reads so that they don't interfere with your controller/view programming.
Use a service from a service bus if you're after decoupling, and use an async pattern in your async pages. Look at Rhino.ServiceBus, NServiceBus and MassTransit for .Net-native implementations, and RabbitMQ for something different: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/.
Edit: I've had some time to try Rabbit out in a way that pushed messages to my service, which in turn pushed updates to the book-keeping app. RabbitMQ is a message broker, a.k.a. a MOM (message-oriented middleware), and you could use it to send messages to your application server.
You can also simply provide service interfaces. Read Eric Evan's Domain Driven Design for a more detailed description.
REST-ful service interfaces deal a lot with data, and more specifically with addressable resources. REST can greatly simplify your programming model and allows great control over output through the HTTP protocol. WCF's upcoming programming model uses true REST as defined in the original thesis, where each document should, to some extent, provide URIs for continued navigation. Have a look at this.
(In my first version of this post, I lamented REST for being 'slow', whatever that means.) REST-based APIs are also pretty much what CouchDB and Riak use.
ADO.Net is rather crap (!) [N+1 problems with lazy collections because of coding to implementation, data-access leakage - you always need your db context where your query code is, etc.] in comparison to, for example, LightSpeed (commercial) or NHibernate. Spring.Net also allows you to wrap service interfaces in their container with a web-service facade, but (without having browsed it for a while) I think it's a bit too XML-y in its configuration.
Edit 1: By ADO.Net here I mean the default "best practice" with DataSets, DataAdapter and iterating over lots of rows from a DataReader; it breeds rather ugly and hard-to-debug code. The N+1 stuff, yes, that is about the Entity Framework.
(Edit 2: EntityFramework doesn't impress me either!)
Edit 1: Create your domain layer in a separate assembly [a.k.a. Core] and provide all domain and application services there, then import this assembly from your specific MVC application. Wrap data access in a DAO/repository, behind an interface in your Core assembly, which your Data assembly then references and implements. Wire up interface and implementation with IoC. You can even program something for dynamic service discovery with the above-mentioned service buses, to resolve the interfaces. WCF uses interfaces like this, and so do most of the above service buses; you can provide a sub-component resolver in your IoC container to do this automatically. A sketch of this layering follows.
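All names below (MyApp.Core, IOrderRepository, and so on) are hypothetical, and Ninject stands in for whichever IoC container you use:

// Core assembly: domain types plus the repository abstraction only.
namespace MyApp.Core
{
    public class Order { public int Id { get; set; } }

    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }
}

// Data assembly: references Core and implements the abstraction.
namespace MyApp.Data
{
    using MyApp.Core;

    public class SqlOrderRepository : IOrderRepository
    {
        public Order GetById(int id) { /* query the database */ return new Order { Id = id }; }
        public void Save(Order order) { /* persist the order */ }
    }
}

// MVC application: the composition root wires interface to implementation.
namespace MyApp.Web
{
    using MyApp.Core;
    using MyApp.Data;
    using Ninject;

    public static class Bootstrapper
    {
        public static IKernel BuildKernel()
        {
            var kernel = new StandardKernel();
            kernel.Bind<IOrderRepository>().To<SqlOrderRepository>();
            return kernel;
        }
    }
}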
Edit 2:
A great combo for the above would be CQRS + EventSourcing + Reactive Extensions. Your write model would take commands, your domain model would decide whether to accept them, and it would push events into the Reactive Extensions pipeline, perhaps also over RabbitMQ, which your read model would consume.
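A toy sketch of that pipeline using Rx's Subject as the event stream (the event and class names are invented; a real system would persist events rather than only push them in memory):

using System;
using System.Reactive.Subjects;

// An event produced by the write side.
public class DinnerCreated { public string Title; }

// Write model: decides whether to accept the command, then emits an event.
public class DinnerWriteModel
{
    private readonly ISubject<object> events;

    public DinnerWriteModel(ISubject<object> events) { this.events = events; }

    public void CreateDinner(string title)
    {
        if (string.IsNullOrEmpty(title))
            throw new ArgumentException("Command rejected: a dinner needs a title.");
        events.OnNext(new DinnerCreated { Title = title });
    }
}

public static class Demo
{
    public static void Main()
    {
        var stream = new Subject<object>();

        // Read model: consumes the event stream and updates its own store.
        stream.Subscribe(e =>
        {
            var created = e as DinnerCreated;
            if (created != null)
                Console.WriteLine("Read model updated: " + created.Title);
        });

        new DinnerWriteModel(stream).CreateDinner("Tacos");
    }
}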
Update 2010-01-02 (edit 1)
The gist of my idea has been codified by something called MindTouch Dream. They have made a screencast in which they treat almost all parts of a web application as a (web) service, which is also exposed with REST.
They have created a highly parallel framework using co-routines to handle this, including their own elastic thread pool.
To all the nay-sayers in this question, in ur face :p! Listen to this screen-cast, especially at 12 minutes.
The actual framework is here.
If you are into this sort of programming, have a look at how monads work and their implementations in C#. You can also read up on CoRoutines.
Happy new year!
Update 2010-11-27 (edit 2)
It turned out coroutines got productized in the Task Parallel Library from Microsoft. Task now implements the same features, as it implements IAsyncResult. Caliburn is a cool framework that uses them.
Reactive Extensions took monad comprehensions to the next level of asynchronicity.
The ALT.Net world seems to be moving in the direction I talked about when I wrote this answer the first time, albeit with new types of architectures I knew little of.
You should define your models in a data-access-agnostic way, e.g. using the Repository pattern. Then you can create concrete implementations backed by specific data-access technologies (web service, SQL, etc.), as sketched below.
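For instance (all names hypothetical), the MVC app codes against the interface and never learns which backend it got:

// The data-access-agnostic abstraction the MVC app codes against.
public interface IProductRepository
{
    Product GetById(int id);
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// One implementation per technology; the composition root picks one.
public class SqlProductRepository : IProductRepository
{
    public Product GetById(int id)
    {
        /* run a SQL query here */
        return new Product { Id = id, Name = "from SQL" };
    }
}

public class WebServiceProductRepository : IProductRepository
{
    public Product GetById(int id)
    {
        /* call the remote service here */
        return new Product { Id = id, Name = "from the web service" };
    }
}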
It really depends on the size of this MVC project. I would say keep the UI and domain in the same running environment if the website is going to be used by a small number of users (< 5000).
On the other hand, if you are planning a site that is going to be accessed by millions, you have to think distributed, and that means you need to build your website in a way that it can scale up/out. That means you might need extra servers (web, application and database).
For this to work nicely, you need to decouple your MVC UI site from the application. The application layer would usually contain your domain model and might be exposed through WCF or a service bus. I would prefer a service bus because it is more reliable and can use persistent queues like MSMQ.
I hope this helps