I'm trying to set up blade unit tests in an MVC Turbine-derived site. The problem is that I can't seem to mock the IServiceLocator interface without hitting the following exception:
System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
at System.Reflection.Emit.TypeBuilder._TermCreateClass(Int32 handle, Module module)
at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
at System.Reflection.Emit.TypeBuilder.CreateType()
at Castle.DynamicProxy.Generators.Emitters.AbstractTypeEmitter.BuildType()
at Castle.DynamicProxy.Generators.Emitters.AbstractTypeEmitter.BuildType()
at Castle.DynamicProxy.Generators.InterfaceProxyWithTargetGenerator.GenerateCode(Type proxyTargetType, Type[] interfaces, ProxyGenerationOptions options)
at Castle.DynamicProxy.DefaultProxyBuilder.CreateInterfaceProxyTypeWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options)
at Castle.DynamicProxy.ProxyGenerator.CreateInterfaceProxyTypeWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options)
at Castle.DynamicProxy.ProxyGenerator.CreateInterfaceProxyWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, IInterceptor[] interceptors)
at Rhino.Mocks.MockRepository.MockInterface(CreateMockState mockStateFactory, Type type, Type[] extras)
at Rhino.Mocks.MockRepository.CreateMockObject(Type type, CreateMockState factory, Type[] extras, Object[] argumentsForConstructor)
at Rhino.Mocks.MockRepository.Stub(Type type, Object[] argumentsForConstructor)
at Rhino.Mocks.MockRepository.<>c__DisplayClass1`1.<GenerateStub>b__0(MockRepository repo)
at Rhino.Mocks.MockRepository.CreateMockInReplay<T>(Func`2 createMock)
at Rhino.Mocks.MockRepository.GenerateStub<T>(Object[] argumentsForConstructor)
at XXX.BladeTest.SetUp()
Everything I find when searching for this error points to 32-bit vs. 64-bit DLL compilation issues, but MVC Turbine uses the service locator facade everywhere and we haven't had any other problems; it only fails when Rhino Mocks tries to mock it.
It blows up on the second line of this NUnit setup method:
IRotorContext _context;
IServiceLocator _locator;
[SetUp]
public void SetUp()
{
_context = MockRepository.GenerateStub<IRotorContext>();
_locator = MockRepository.GenerateStub<IServiceLocator>();
_context.Expect(x => x.ServiceLocator).Return(_locator);
}
Just a quick aside: I've tried writing a fake that implements IServiceLocator, thinking that I could just keep track of calls to the type registration methods. This won't work in our setup, because we extend the service locator's interface in such a way that the registration logic is only invoked for Unity-based types.
This has been fixed in Moq v4.0 beta. The issue was in Castle DynamicProxy 2.1 when creating dynamic proxies for interfaces with generic constraints.
http://code.google.com/p/moq/issues/detail?id=177
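To illustrate what "generic constraints" means here (this interface is purely hypothetical and only shaped like a registration API; it is not MVC Turbine's actual IServiceLocator), the kind of member that tripped Castle DynamicProxy 2.1 looks like this:
public interface IRegistrationLike
{
    // Proxying an interface whose generic method carries constraints like
    // these is what produced the BadImageFormatException in DynamicProxy 2.1.
    void Register<TContract, TImplementation>()
        where TContract : class
        where TImplementation : class, TContract;
}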
Yes, I've run into the same issue with RhinoMocks while testing the runtime for Turbine as well. I hate to say it, but I worked around the issue by providing my own fake for IServiceLocator where I needed it; as you explained, though, you can't do that. :(
I'm not following this piece from your question, "if the type isn't Unity-based"??
I want to use Agatha RRSL with my own StructureMap 3.0 wrapper for Agatha's IoC container. Agatha ships NuGet packages built against StructureMap 2.6, which I'd rather not use.
I started by copy/pasting the code from the Agatha.StructureMap source and proceeded to make the changes needed to target StructureMap 3.0.
The issue I now have is that I get the following StructureMap exception:
StructureMap.StructureMapBuildPlanException occurred
_HResult=-2146233088
_message=Unable to create a build plan for concrete type Agatha.Common.WCF.RequestProcessorProxy
HResult=-2146233088
IsTransient=false
Message=Unable to create a build plan for concrete type Agatha.Common.WCF.RequestProcessorProxy
new RequestProcessorProxy(InstanceContext, String endpointConfigurationName, String remoteAddress)
┗ InstanceContext = **Default**
String endpointConfigurationName = Required primitive dependency is not explicitly defined
String remoteAddress = Required primitive dependency is not explicitly defined
Source=StructureMap
Context=new RequestProcessorProxy(InstanceContext, String endpointConfigurationName, String remoteAddress)
┗ InstanceContext = **Default**
String endpointConfigurationName = Required primitive dependency is not explicitly defined
String remoteAddress = Required primitive dependency is not explicitly defined
Title=Unable to create a build plan for concrete type Agatha.Common.WCF.RequestProcessorProxy
StackTrace:
at StructureMap.Pipeline.ConstructorInstance`1.ToBuilder(Type pluginType, Policies policies) in c:\BuildAgent\work\996e173a8ceccdca\src\StructureMap\Pipeline\ConstructorInstance.cs:line 83
InnerException:
It looks to me as though the constructor StructureMap has picked, but considers improperly configured, is the one with multiple parameters. In reality I need it to use the parameterless constructor.
However, I think I've configured the constructor selection properly. Here is the code I use to tell StructureMap to use the parameterless constructor of RequestProcessorProxy:
structureMapContainer.Configure(x => x.ForConcreteType<RequestProcessorProxy>().Configure.SelectConstructor(() => new RequestProcessorProxy()));
What may have gone wrong?
Just as a heads up, I'm new to both StructureMap and Agatha, so I may have misunderstood any or all of the above...
I've never used SelectConstructor, so I don't know how to make it work, but if you want StructureMap to use the parameterless constructor then you can do it like this when you resolve the concrete type:
var container =
new Container(
c => c.For<RequestProcessorProxy>().Use(() => new RequestProcessorProxy()));
or like this when you are resolving it by the interface:
var container =
new Container(
c => c.For<IRequestProcessor>().Use(() => new RequestProcessorProxy()));
I am not familiar with Agatha RRSL at all, so I don't know whether I picked the right interface.
Hope this helps!
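As a quick usage sketch (assuming IRequestProcessor is the Agatha interface the proxy implements), resolving after either registration above should now go through the parameterless constructor:
// Because Use(() => new RequestProcessorProxy()) supplies a lambda,
// StructureMap invokes it directly instead of building a constructor plan,
// so the multi-parameter constructor is never considered.
var processor = container.GetInstance<IRequestProcessor>();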
We're trying to use MEF 2 with ASP.NET MVC 4 to support an extensible application. There are really 2 parts to this question (hope that's okay SO gods):
How do we use Microsoft.Composition and the MVC container code (MEF/MVC demo source) to replace Ninject as our DI for ICoreService, ICoreRepository, IUnitOfWork, and IDbContext?
It looks like we can't use both Ninject and the MVC container at the same time (I'm sure many are saying "duh"), so we'd like to go with MEF if possible. I tried removing Ninject and putting [Export] attributes on each of the relevant implementations, which span two assemblies in addition to the web project, but Save() silently failed to persist anything. I interpreted that as a lifetime/singleton issue, but could not figure out how to sort it out (including with [Shared]).
How do we load multiple assemblies dynamically at runtime?
I understand how to use CompositionProvider.AddAssemblies() to load specific DLLs, but for our application to be properly extensible we need something closer to how I (vaguely) understand catalogs in "full" MEF, which have been stripped out of the Microsoft.Composition package (I think?). We want to load all IPluggable (or whatever) assemblies, each of which will include its own UI, service, and repository layers and tie in to the Core service/repo too.
EDIT 1
A little more reading solved the first problem, which was indeed a singleton issue. Attaching [Shared(Boundaries.HttpRequest)] to the CoreDbContext solved the persistence problem. When I simply used [Shared], it widened the 'singletonization' to the application level (cross-request) and threw an exception saying that the edited object was already in the EF cache.
EDIT 2
I used the iterative assembly loading "meat" from Nick Blumhardt's answer below to update my Global.asax.cs code. The standard MEF 2 container from his code did not work in mine, probably because I'm using the MEF 2(?) MVC container. Summary: the code listed below now works as desired.
CoreDbContext.cs (Data.csproj)
[Export(typeof(IDbContext))]
[Shared(Boundaries.HttpRequest)]
public class CoreDbContext : IDbContext { ... }
CoreRepository.cs (Data.csproj)
[Export(typeof(IUnitOfWork))]
[Export(typeof(ICoreRepository))]
public class CoreRepository : ICoreRepository, IUnitOfWork
{
[ImportingConstructor]
public CoreRepository(IDbContext context)
{
_context = context;
}
...
}
CoreService.cs (Services.csproj)
[Export(typeof(ICoreService))]
public class CoreService : ICoreService
{
[ImportingConstructor]
public CoreService(ICoreRepository repository, IUnitOfWork unitOfWork)
{
_repository = repository;
_unitOfWork = unitOfWork;
}
...
}
UsersController.cs (Web.csproj)
public class UsersController : Controller
{
[ImportingConstructor]
public UsersController(ICoreService service)
{
_service = service;
}
...
}
Global.asax.cs (Web.csproj)
public class MvcApplication : System.Web.HttpApplication
{
protected void Application_Start()
{
CompositionProvider.AddAssemblies(
typeof(ICoreRepository).Assembly,
typeof(ICoreService).Assembly);
// EDIT 2 --
// updated code to answer my 2nd question based on Nick Blumhardt's answer
foreach (var file in System.IO.Directory.GetFiles(Server.MapPath("Plugins"), "*.dll"))
{
try
{
var name = System.Reflection.AssemblyName.GetAssemblyName(file);
var assembly = System.Reflection.Assembly.Load(name);
CompositionProvider.AddAssembly(assembly);
}
catch
{
// You'll need to craft exception handling to
// your specific scenario.
}
}
}
}
If I understand you correctly, you're looking for code that will discover all of the assemblies in a directory and load them into the container; here's a skeleton for doing that:
var config = new ContainerConfiguration();
foreach (var file in Directory.GetFiles(@".\Plugins", "*.dll"))
{
try
{
var name = AssemblyName.GetAssemblyName(file);
var assembly = Assembly.Load(name);
config.WithAssembly(assembly);
}
catch
{
// You'll need to craft exception handling to
// your specific scenario.
}
}
var container = config.CreateContainer();
// ...
Hammett discusses this scenario and shows a more complete version in F# here: http://hammett.castleproject.org/index.php/2011/12/a-decent-directorycatalog-implementation/
Note, this won't detect assemblies added to the directory after the application launches - Microsoft.Composition isn't intended for that kind of use, so if the set of plug-ins changes your best bet is to detect that with a directory watcher and prompt the user to restart the app. HTH!
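If you do go the directory-watcher route, a minimal sketch might look like the following (assuming it runs somewhere with access to Server.MapPath, e.g. Application_Start, and that "Plugins" is the folder from your code; the logging call is just a placeholder for however you want to surface the restart prompt):
// Keep a reference to the watcher (e.g. in a static field) so it isn't collected.
var watcher = new System.IO.FileSystemWatcher(Server.MapPath("Plugins"), "*.dll");
watcher.Created += (sender, e) =>
{
    // Microsoft.Composition won't pick this assembly up in the running
    // container; record it and prompt for an application restart instead.
    System.Diagnostics.Trace.TraceInformation("New plug-in detected: " + e.FullPath);
};
watcher.EnableRaisingEvents = true;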
MEF is not intended to be used as a DI framework. That means you should separate your "plugins" (whatever they are) composition from your infrastructure dependencies, implementing the former with MEF and the latter with whatever DI framework you prefer.
I think there are a few misunderstandings about what MEF can and can't do.
Originally MEF was conceived purely as an extensibility architecture, but by its first release it had evolved to the point where it can also be used as a full DI container. MEF will handle dependency injection for you, and does so through its ExportProvider architecture. It is also entirely possible to use other DI frameworks with MEF. So in reality there are a number of ways this could be achieved:
Build a NinjectExportProvider that you can plug into MEF, so when MEF is searching for available exports, it will be able to interrogate your Ninject container.
Use an implementation of the Common Service Locator pattern to bridge between MEF and Ninject or vice versa.
Because you are using MEF for the extensibility, you'll probably want to use the former, as this exposes your Ninject components to MEF, which in turn exposes them to your plugins.
The other thing to consider, which is a bit disappointing, is that in reality there isn't much room for automagically plugging in features à la WordPress on ASP.NET. ASP.NET is a compiled and managed environment, so you either resort to late binding by loading assemblies manually at runtime, or you restart the application to pick up the new plugins, which rather defeats the purpose of being able to plug new extensions in through the application.
My advice is to plan your architecture to pick up any extensibility points at startup, and assume that any core changes will require a deployment and an application restart.
In terms of the direct questions asked:
The CompositionProvider accepts an instance of ContainerConfiguration, which is used internally to create the CompositionContainer used by the provider. You can use this as the point at which you customise how your container is instantiated. ContainerConfiguration supports a WithProvider method:
var configuration = new ContainerConfiguration().WithProvider(new NinjectExportDescriptorProvider(kernel));
CompositionProvider.SetConfiguration(configuration);
Where NinjectExportDescriptorProvider might be:
public class NinjectExportDescriptorProvider: ExportDescriptorProvider
{
private readonly IKernel _kernel;
public NinjectExportDescriptorProvider(IKernel kernel)
{
if (kernel == null) throw new ArgumentNullException("kernel");
_kernel = kernel;
}
public override IEnumerable<ExportDescriptorPromise> GetExportDescriptors(
CompositionContract contract, DependencyAccessor dependencyAccessor)
{
var type = contract.ContractType;
if (!_kernel.GetBindings(type).Any())
return NoExportDescriptors;
return new[] {
new ExportDescriptorPromise(
contract,
"Ninject Kernel",
true, // Hmmm... need to consider this, setting it to true will create it as a shared part, false as new instance each time,
NoDependencies,
_ => ExportDescriptor.Create((c, o) => _kernel.Get(type), NoMetadata)) };
}
}
Note: I have not tested this, this is all theory, and is based on the example AppSettingsExportDescriptorProvider at: http://mef.codeplex.com/wikipage?title=ProgrammingModelExtensions
It's different from using the standard ExportProvider, because the CompositionProvider is built around lightweight composition. But essentially you're wrapping up access to your Ninject kernel and making it available to your CompositionContainer.
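As an untested usage sketch along the same lines (the binding below is just a placeholder; any existing Ninject registrations would do), the idea is that anything bound in the kernel becomes resolvable by parts composed through the CompositionProvider:
// Placeholder binding; substitute your real Ninject registrations.
var kernel = new StandardKernel();
kernel.Bind<ICoreService>().To<CoreService>();

// Expose the kernel to lightweight composition via the custom provider above.
var configuration = new ContainerConfiguration()
    .WithProvider(new NinjectExportDescriptorProvider(kernel));
CompositionProvider.SetConfiguration(configuration);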
As with adding a specific new provider (see above), you can use the ContainerConfiguration to read the available assemblies, probably something like:
var configuration = new ContainerConfiguration().WithAssemblies(AppDomain.CurrentDomain.GetAssemblies());
Again, I haven't tested all of this, but I hope it at least points you in the right direction.
I'm adding StructureMap to my project for DI/IoC. I built a demo project to get familiar with it before adding it to my application. After getting it working in the demo I started moving it into my app.
I also use Glimpse, and that seems to be the only thing causing problems since the StructureMap addition (so far).
I followed a pretty basic StructureMap tutorial and used the NuGet package; at this point I'm not even injecting any dependencies yet, just getting everything wired up.
Here's my Application_Start:
IContainer container = new Container(x =>
{
x.For<IControllerActivator>().Use<StructureMapControllerActivator>();
});
DependencyResolver.SetResolver(new SmDependencyResolver(container));
If I disable Glimpse, my application works as it did before and I'd be ready to start doing DI. But if I leave Glimpse enabled, I get a NullReferenceException. Here's the stack trace; I'm not having much luck following it.
System.NullReferenceException: Object reference not set to an instance of an object.
at Glimpse.Mvc3.Interceptor.ActionInvokerProxyGenerationHook.NonProxyableMemberNotification(Type type, MemberInfo memberInfo)
at Castle.DynamicProxy.Contributors.MembersCollector.AcceptMethod(MethodInfo method, Boolean onlyVirtuals, IProxyGenerationHook hook)
at Castle.DynamicProxy.Contributors.ClassMembersCollector.GetMethodToGenerate(MethodInfo method, IProxyGenerationHook hook, Boolean isStandalone)
at Castle.DynamicProxy.Contributors.MembersCollector.AddMethod(MethodInfo method, IProxyGenerationHook hook, Boolean isStandalone)
at Castle.DynamicProxy.Contributors.MembersCollector.AddProperty(PropertyInfo property, IProxyGenerationHook hook)
at Castle.DynamicProxy.Contributors.MembersCollector.CollectProperties(IProxyGenerationHook hook)
at Castle.DynamicProxy.Contributors.MembersCollector.CollectMembersToProxy(IProxyGenerationHook hook)
at Castle.DynamicProxy.Contributors.ClassProxyTargetContributor.<CollectElementsToProxyInternal>d__2.MoveNext()
at Castle.DynamicProxy.Contributors.CompositeTypeContributor.CollectElementsToProxy(IProxyGenerationHook hook, MetaType model)
at Castle.DynamicProxy.Generators.ClassProxyGenerator.GenerateType(String name, Type[] interfaces, INamingScope namingScope)
at Castle.DynamicProxy.Generators.BaseProxyGenerator.ObtainProxyType(CacheKey cacheKey, Func`3 factory)
at Castle.DynamicProxy.ProxyGenerator.CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, Object[] constructorArguments, IInterceptor[] interceptors)
at Glimpse.Mvc3.Extensions.ControllerExtentions.TrySetActionInvoker(IController iController, IGlimpseLogger logger)
at System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, ref IController controller, ref IControllerFactory factory)
at System.Web.Mvc.MvcHandler.<>c__DisplayClass6.<BeginProcessRequest>b__2()
at System.Web.Mvc.SecurityUtil.<>c__DisplayClassb`1.<ProcessInApplicationTrust>b__a()
at System.Web.Mvc.SecurityUtil.ProcessInApplicationTrust(Func`1 func)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, ref Boolean completedSynchronously)
Based on your callstack, it looks like you are using an older version of Glimpse.
This bug was fixed in version 0.85 of Glimpse, available now on NuGet and CodePlex. Upgrading should fix your problem.
I have been doing my first Test Driven Development project recently and have been learning Ninject and Moq; this is my first attempt at all of this. I've found the TDD approach thought provoking, and Ninject and Moq have been great. The project I am working on is not the best fit for Ninject, though, as it is a highly configurable C# program designed to test the use of a web service interface.
I have broken it up into modules and have interfaces all over the shop, but I am still finding that I have to use lots of constructor arguments when getting an implementation of a service from the Ninject kernel. For example:
In my Ninject module:
Bind<IDirEnum>().To<DirEnum>()
My DirEnum class:
public class DirEnum : IDirEnum
{
public DirEnum(string filePath, string fileFilter,
bool includeSubDirs)
{
....
In my Configurator class (the main entry point), which hooks all the services together:
class Configurator
{
public void ConfigureServices(string[] args)
{
ArgParser argParser = new ArgParser(args);
IDirEnum dirEnum = kernel.Get<IDirEnum>(
new ConstructorArgument("filePath", argParser.filePath),
new ConstructorArgument("fileFilter", argParser.fileFilter),
new ConstructorArgument("includeSubDirs", argParser.subDirs)
);
filePath, fileFilter and includeSubDirs are command line options to the program. So far so good. However, being a conscientious kind of guy, I have a test covering this bit of code, and I'd like to use a Moq object. I have created a Ninject module for my tests:
public class TestNinjectModule : NinjectModule
{
internal IDirEnum mockDirEnum { get; set; }
public override void Load()
{
Bind<IDirEnum>().ToConstant(mockDirEnum);
}
}
And in my test I use it like this:
[TestMethod]
public void Test()
{
// Arrange
TestNinjectModule testModule = new TestNinjectModule();
Mock<IDirEnum> mockDirEnum = new Mock<IDirEnum>();
testModule.mockDirEnum = mockDirEnum.Object;
// Act
string[] args = ...; // the command line arguments to simulate
Configurator configurator = new Configurator();
configurator.ConfigureServices(args);
// Assert
// here lies my problem! How do I test what values were passed as the
// constructor arguments???
}
So the above shows my problem. How can I test what arguments were passed as the ConstructorArguments of the mock object? My guess is that Ninject is discarding the ConstructorArguments in this case, since the ToConstant binding does not require them? Can I test this with a Moq object, or do I need to hand-code a fake that implements IDirEnum and accepts and 'records' the constructor arguments?
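(For what it's worth, if I did hand-code it, I imagine the recording fake would look something like the sketch below; FakeDirEnum is just a made-up name and the properties exist purely so a test can assert on the captured values.)
public class FakeDirEnum : IDirEnum
{
    // Captured constructor arguments, exposed for assertions in tests.
    public string FilePath { get; private set; }
    public string FileFilter { get; private set; }
    public bool IncludeSubDirs { get; private set; }

    public FakeDirEnum(string filePath, string fileFilter, bool includeSubDirs)
    {
        FilePath = filePath;
        FileFilter = fileFilter;
        IncludeSubDirs = includeSubDirs;
    }

    // ... the IDirEnum members themselves would be implemented as no-ops
}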
n.b. this code is 'example' code, i.e. I have not reproduced my code verbatim, but I think I have expressed enough to hopefully convey the issues? If you need more context, please ask!
Thanks for looking. Be gentle, this is my first time ;-)
Jim
There are a few problems with the way you designed your application. First of all, you are calling the Ninject kernel directly from within your code. This is called the Service Locator pattern and it is considered an anti-pattern. It makes testing your application much harder and you are already experiencing this. You are trying to mock the Ninject container in your unit test, which complicates things tremendously.
Next, you are injecting primitive types (string, bool) in the constructor of your DirEnum type. I like how MNrydengren states it in the comments:
take "compile-time" dependencies
through constructor parameters and
"run-time" dependencies through method
parameters
It's hard for me to guess what that class should do, but since you are injecting these variables that change at run-time into the DirEnum constructor, you end up with a hard to test application.
There are multiple ways to fix this. Two that come to mind are the use of method injection and the use of a factory. Which one is feasible is up to you.
Using method injection, your Configurator class will look like this:
class Configurator
{
private readonly IDirEnum dirEnum;
// Injecting IDirEnum through the constructor
public Configurator(IDirEnum dirEnum)
{
this.dirEnum = dirEnum;
}
public void ConfigureServices(string[] args)
{
var parser = new ArgParser(args);
// Inject the run-time arguments into a method call
this.dirEnum.SomeOperation(
parser.filePath,
parser.fileFilter,
parser.subDirs);
}
}
Using a factory, you would need to define a factory that knows how to create new IDirEnum types:
interface IDirEnumFactory
{
IDirEnum CreateDirEnum(string filePath, string fileFilter,
bool includeSubDirs);
}
Your Configuration class can now depend on the IDirEnumFactory interface:
class Configurator
{
private readonly IDirEnumFactory dirFactory;
// Injecting the factory through the constructor
public Configurator(IDirEnumFactory dirFactory)
{
this.dirFactory = dirFactory;
}
public void ConfigureServices(string[] args)
{
var parser = new ArgParser(args);
// Creating a new IDirEnum using the factory
var dirEnum = this.dirFactory.CreateDirEnum(
parser.filePath,
parser.fileFilter,
parser.subDirs);
}
}
See how in both examples the dependencies get injected into the Configurator class. This is called the Dependency Injection pattern, opposed to the Service Locator pattern, where the Configurator asks for its dependencies by calling into the Ninject kernel.
Now, since your Configurator is completely free from any IoC container what so ever, you can now easily test this class, by injecting a mocked version of the dependency it expects.
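To make that concrete, a test for the factory-based version might look roughly like this (an untested sketch; the argument values assume ArgParser simply maps the three command-line tokens in order, which is a guess about your parser):
[TestMethod]
public void ConfigureServices_PassesParsedArgumentsToFactory()
{
    // Arrange
    var factory = new Mock<IDirEnumFactory>();
    factory.Setup(f => f.CreateDirEnum(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<bool>()))
           .Returns(new Mock<IDirEnum>().Object);
    var configurator = new Configurator(factory.Object);
    var args = new[] { @"c:\temp", "*.txt", "true" };

    // Act
    configurator.ConfigureServices(args);

    // Assert: verify the exact values that reached the factory, which is what
    // the original question wanted to check for the constructor arguments.
    factory.Verify(f => f.CreateDirEnum(@"c:\temp", "*.txt", true), Times.Once());
}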
What is left is to configure the Ninject container in the top of your application (in DI terminology: the composition root). With the method injection example, your container configuration would stay the same, with the factory example, you will need to replace the Bind<IDirEnum>().To<DirEnum>() line with something as follows:
public static void Bootstrap()
{
kernel.Bind<IDirEnumFactory>().To<DirEnumFactory>();
}
Of course, you will need to create the DirEnumFactory:
class DirEnumFactory : IDirEnumFactory
{
public IDirEnum CreateDirEnum(string filePath, string fileFilter,
bool includeSubDirs)
{
return new DirEnum(filePath, fileFilter, includeSubDirs);
}
}
WARNING: Do note that factory abstractions are in most cases not the best design, as explained here.
The last thing you need to do is to create a new Configurator instance. You can simply do this as follows:
public static Configurator CreateConfigurator()
{
return kernel.Get<Configurator>();
}
public static void Main(string[] args)
{
Bootstrap();
var configurator = CreateConfigurator();
configurator.ConfigureServices(args);
}
Here we call the kernel. Although calling the container directly should be prevented, there will always be at least one place in your application where you call the container, simply because it must wire everything up. However, we try to minimize the number of times the container is called directly, because doing so improves, among other things, the testability of our code.
See how I didn't really answer your question, but showed a way to work around the problem very effectively.
You might still want to test your DI configuration. That's very valid IMO; I do this in my applications. But for this you often don't need the DI container, and even if you do, it doesn't mean that all your tests should have a dependency on the container. This relationship should only exist for the tests that test the DI configuration itself. Here is such a test:
[TestMethod]
public void DependencyConfiguration_IsConfiguredCorrectly()
{
// Arrange
Program.Bootstrap();
// Act
var configurator = Program.CreateConfigurator();
// Assert
Assert.IsNotNull(configurator);
}
This test indirectly depends on Ninject and it will fail when Ninject is not able to construct a new Configurator instance. When you keep your constructors free of logic and only use them for storing the supplied dependencies in private fields, you can run this without the risk of calling out to a database, web service, or anything else.
I hope this helps.
Recently I've switched to Ninject 2.0 release and started getting the following error:
Error occured: Error activating SomeController
More than one matching bindings are available.
Activation path:
1) Request for SomeController
Suggestions:
1) Ensure that you have defined a binding for SomeController only once.
However, I'm unable to find a reliable reproduction path. Sometimes it occurs, sometimes it does not.
I'm using NinjectHttpApplication for automatic controller injection. Controllers are defined in a separate assembly:
public class App : NinjectHttpApplication
{
protected override IKernel CreateKernel()
{
INinjectModule[] modules = new INinjectModule[] {
new MiscModule(),
new ProvidersModule(),
new RepositoryModule(),
new ServiceModule()
};
return new StandardKernel(modules);
}
protected override void OnApplicationStarted()
{
RegisterRoutes(RouteTable.Routes);
RegisterAllControllersIn("Sample.Mvc");
base.OnApplicationStarted();
}
/* ............. */
}
Maybe someone is familiar with this error.
Any advice?
I finally figured this issue out recently. Apparently, the NinjectHttpApplication.RegisterAllControllersIn() function doesn't set up all of the bindings it needs to. It binds your concrete controller implementations to IController requests. For example, if you have a controller class called SampleMvcController, which inherits from System.Web.Mvc.Controller, it would do the following named binding during application start:
kernel.Bind<IController>().To<SampleMvcController>().InTransientScope().Named("SampleMvc");
But when debugging the NinjectControllerFactory, I found that requests are being made for the Ninject kernel to return an object of the class "SampleMvcController", not for a concrete implementation of IController using the named binding "SampleMvc".
Because of this, when the first web request that involves SampleMvcController is made, Ninject creates an implicit self-binding for SampleMvcController. This is not thread safe, though, so if several web requests arrive at once, that binding can be created more than once, and you are left with this error because there are now multiple bindings for SampleMvcController.
You can verify this by quickly refreshing an MVC URL, right after causing your web application to restart.
The fix:
The simplest way to fix this issue is to create a new NinjectModule for your controller bindings, and to load this module during application start. Within this module, you self bind each of your defined controllers, like so:
class ControllerModule : NinjectModule {
public override void Load() {
Bind<SampleMvcController>().ToSelf();
Bind<AnotherMvcController>().ToSelf();
}
}
But if you don't mind changing the Ninject source code, you can modify the RegisterAllControllersIn() function to self bind each controller it comes across.
I have been dealing with this problem for months. I tried so many options but was unable to come to a solution. I knew that it was a threading problem because it would only occur when there was a heavy load on my site. Just recently a bug was reported and fixed in the ninject source code that solves this problem.
Here is a reference to the issue. It was fixed in build 2.1.0.70 of the Ninject source. The key change was in KernelBase.cs by removing the line
context.Plan = planner.GetPlan(service);
and replacing it with
lock (planner)
{
context.Plan = planner.GetPlan(service);
}
To use this new build with MVC, you will need to get the latest build of Ninject and the latest build of ninject.web.mvc, then build ninject.web.mvc against the new Ninject build.
I have been using this new build for about a week with a heavy load and no problems. That is the longest it has gone without a problem so I would consider this to be a solution.
Are you sure you really are creating a single, completely new kernel from scratch each time your OnApplicationStarted is invoked? If you're not, and you're actually creating it once but potentially running the registration bit twice, that would explain the duplicate bindings. Remember that you're not guaranteed to only ever have one App class instantiated within a given AppDomain.
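(If that turns out to be the case, one purely illustrative way to rule it out is to guard the registration so it only runs once per AppDomain, keeping the rest of the OnApplicationStarted code from your question:)
private static readonly object RegistrationLock = new object();
private static bool controllersRegistered;

protected override void OnApplicationStarted()
{
    RegisterRoutes(RouteTable.Routes);

    // Only register the controllers once, even if more than one App instance
    // ends up running OnApplicationStarted in the same AppDomain.
    lock (RegistrationLock)
    {
        if (!controllersRegistered)
        {
            RegisterAllControllersIn("Sample.Mvc");
            controllersRegistered = true;
        }
    }

    base.OnApplicationStarted();
}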
My answer was a bit more obvious.
I had declared the binding for one of my controllers more than once during refactor of my code.
I added this to my global.asax.cs file:
public void RegisterAllControllersInFix(Assembly assembly)
{
RegisterAllControllersInFix(assembly, GetControllerName);
}
public void RegisterAllControllersInFix(Assembly assembly, Func<Type, string> namingConvention)
{
foreach (Type type in assembly.GetExportedTypes().Where(IsController))
Kernel.Bind(type).ToSelf();
}
private static bool IsController(Type type)
{
return typeof(IController).IsAssignableFrom(type) && type.IsPublic && !type.IsAbstract && !type.IsInterface;
}
private static string GetControllerName(Type type)
{
string name = type.Name.ToLowerInvariant();
if (name.EndsWith("controller"))
name = name.Substring(0, name.IndexOf("controller"));
return name;
}
Then called it from my OnApplicationStarted() method as follows:
RegisterAllControllersIn(Assembly.GetExecutingAssembly());
RegisterAllControllersInFix(Assembly.GetExecutingAssembly());
Difficult to know whether this fixed it though because it's so intermittent.