How to configure Simple Injector depending on build configuration

I want to be able to configure Simple Injector differently for each developer (for prototyping purposes, for example).
The default configuration should be hardcoded, of course.
I have used Unity before, and there I was able to override the hardcoded registrations with an XML configuration file. This config file was not under source control, so each developer could replace the hardcoded registrations with custom ones without interfering with the others.
The developers should not need to submit their configuration to source control.
Is such a scenario supported by Simple Injector?
Is there any best practice for such a scenario?
Does this make sense at all, or is there a better way to achieve what I want?

One of the design decisions for Simple Injector is to not support XML-based configuration out of the box. This decision is described here but can be summarized as:
XML-based configuration is brittle, error prone and always provides a subset of what you can achieve with code-based configuration. General consensus is to use code-based configuration as much as possible and only fall back to file-based configuration for the parts of the configuration that really need to be customizable after deployment. These are normally just a few registrations, since the majority of changes would still require developer interaction (write unit tests or recompile, for instance). Even for those few lines that do need to be configurable, it's a bad idea to require the fully qualified type name in a configuration file. A configuration switch (true/false or a simple enum) is more than enough. You can read the configured value in your code-based configuration; this allows you to keep the type names in your code, refactor easily, get compile-time support, and be much more friendly to the person having to change the configuration file.
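To make the quoted advice concrete, here is a minimal sketch of such a configuration switch (the "UseFakeMailSender" appSetting and the IMailSender implementations are illustrative assumptions, not part of Simple Injector; ConfigurationManager needs a reference to System.Configuration):

// Code-based configuration driven by a simple config switch (sketch).
var container = new Container();

bool useFakeMailSender = bool.TryParse(
    ConfigurationManager.AppSettings["UseFakeMailSender"], out var flag) && flag;

if (useFakeMailSender) {
    // Hypothetical fake implementation for local development.
    container.Register<IMailSender, FakeMailSender>(Lifestyle.Singleton);
} else {
    // Hypothetical production implementation.
    container.Register<IMailSender, SmtpMailSender>(Lifestyle.Singleton);
}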
This, however, doesn't completely satisfy your requirements, since you don't want the configuration to be customizable after deployment; you want to customize it per developer.
For this particular case, you shouldn't fall back on XML-based configuration, IMO. Just as you can exclude XML files using .gitignore, you can do the same with code-based configuration files that developers can change, and that will compile with the rest of the application. Here's an example:
// Global.cs
public void Application_Start() {
    var container = new Container();

    // Default configuration here

    container.Options.AllowOverridingRegistrations = true;
    DeveloperOverrides.ApplyOverrides(container);
    container.Options.AllowOverridingRegistrations = false;

    DependencyResolver.Current = new SimpleInjectorDependencyResolver(container);
}
// DeveloperOverrides.cs
public static class DeveloperOverrides {
    public static void ApplyOverrides(Container container) {
    }
}
These two files can be checked in, with the DeveloperOverrides.ApplyOverrides method left blank. After that, you add an exclusion for DeveloperOverrides.cs to your .gitignore file.
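For reference, the ignore entry is just the file's path within the repository (the path below is an assumption; adjust it to your project layout):

# .gitignore - keep the per-developer override file out of source control
MyWebApp/DeveloperOverrides.cs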
After this, developers can add their own overrides, which are checked by the compiler but never checked in to source control:
// DeveloperOverrides.cs
public static class DeveloperOverrides {
    public static void ApplyOverrides(Container container) {
        container.Register<IMailSender, FakeMailSender>(Lifestyle.Singleton);
    }
}

Related

Castle Windsor only execute certain types of Installers across many projects

We have a large monolithic legacy application with around 45 projects and several different composition roots (console, web, and API apps). Most component registration is done in the composition roots of the apps and in many WindsorInstallers. We want to move component registration out of the composition roots and into WindsorInstallers in each project of the solution, so we no longer need to modify the composition roots when we add a new project; each project should be responsible for its own component registration. We are looking to make this change incrementally, because I tried just having Castle Windsor scan all of our assemblies and run all of our installers, and that caused a myriad of issues that will need to be looked into over time.
With all of that said, we are looking for a way to run only certain installers, so we can go back and fix the broken ones over time while all new ones are picked up automatically. Below is the approach I was headed towards, but I cannot figure it out, or do not even know if it is possible.
All composition roots would have something like this, so that all installers are always run:
container.Install(
    FromAssembly.InDirectory(new AssemblyFilter(HttpRuntime.BinDirectory))
);
However, I would like this install code to run only installers of type IAutoInstaller. That way I could go back and fix my legacy installers just by changing the interface to IAutoInstaller, and I would never need to modify the composition roots.
public interface IAutoInstaller : IWindsorInstaller
{
}

public class ScheduledPaymentInstaller : IAutoInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(Classes.FromAssemblyNamed("DryFly.ScheduledPayments")
            .Pick()
            .WithServiceDefaultInterfaces().LifestylePerWebRequest());
    }
}
In summary, what I am after is a way to automatically execute certain installers from the composition root, so that when I add new projects I do not need to modify that code; I would just add a new installer to the new project. Is there a different approach to solve this problem, or can this be done via Castle Windsor?
It is possible, but not pretty:
var container = new WindsorContainer();

var installers = AppDomain.CurrentDomain
    .GetAssemblies()                  // load all assemblies in the current application domain
    .SelectMany(s => s.GetTypes())    // project all types contained in all assemblies into a single collection
    .Where(type => typeof(IAutoInstaller).IsAssignableFrom(type) && type.IsClass) // keep classes implementing IAutoInstaller (this filters out the interface itself)
    .Select(Activator.CreateInstance) // instantiate each type - this relies on all of them having a parameterless constructor
    .Cast<IAutoInstaller>();          // cast the IEnumerable<object> to an IEnumerable<IAutoInstaller>

foreach (var installer in installers)
{
    installer.Install(container, container.Kernel.ConfigurationStore);
}
There is a massive caveat here, which is that your installers must have a parameterless constructor for this to work. If for some reason this isn't the case, your problem becomes a lot harder to solve generically.
Unfortunately, the IWindsorInstaller interface only enforces a method expecting the container itself, and an implementation of IConfigurationStore. So even if you're not using a configuration store, you still have to supply it. Luckily, when you instantiate a WindsorContainer using the default constructor, it wires up a default Kernel with a DefaultConfigurationStore implementation. This allows you to just pass that in.
If, however, you are using a custom configuration store (such as when using an XML configuration interpreter), and that store is owned by another container (or by a parent or child container related to your current one), and your installer needs access to the values in that particular store, you'll have to pass that store reference into the installer yourself.
As you can see from the list of caveats around the store, you are probably safe to just pass in the kernel default, or even new up a DefaultConfigurationStore.
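As an alternative to scanning the AppDomain yourself, Windsor also has an InstallerFactory hook that lets FromAssembly decide which installer types to instantiate. A sketch of that approach (overload shapes from memory; verify against your Windsor version):

// Only installers implementing IAutoInstaller are selected and run.
public class AutoInstallerFactory : InstallerFactory
{
    public override IEnumerable<Type> Select(IEnumerable<Type> installerTypes)
    {
        return installerTypes.Where(t => typeof(IAutoInstaller).IsAssignableFrom(t));
    }
}

// The composition root stays a one-liner:
container.Install(
    FromAssembly.InDirectory(
        new AssemblyFilter(HttpRuntime.BinDirectory),
        new AutoInstallerFactory()));

Note this has the same parameterless-constructor caveat, since Windsor instantiates the installers itself.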

How to test a private function in Dart?

Say I defined a private function in a Dart file hello.dart:
_hello() {
  return "world";
}
I want to test it in another file mytest.dart:
library mytest;

import 'package:unittest/unittest.dart';

import 'hello.dart';

main() {
  test('test private functions', () {
    expect(_hello(), equals("world"));
  });
}
Unfortunately, the test code doesn't compile: _hello is private to the hello.dart library, so mytest.dart can't see it. But I do need to test that private _hello function. Is there any solution?
While I agree that private methods/classes shouldn't be part of your tests, the meta package does provide a @visibleForTesting annotation, and the analyzer will give you a warning if you attempt to use the member outside of its original library or a test. You can use it like this:
import 'package:meta/meta.dart';

@visibleForTesting
String hello() {
  return "world";
}
Your tests will now be able to use it without error or warning, but if someone else tries to use it they'll get a warning.
Again, the wisdom of doing this is another question - usually, if something is worth testing, it's worth being public (or it'll get tested through your public interfaces, and that's what really matters anyway). At the same time, you might just want rigorous tests, or test-driven principles, even for your private methods/classes - so Dart gives you this way to do it.
Edit to add: if you're developing a library and the file with @visibleForTesting members will be exported, you are essentially adding public API. Someone can consume it with the analyzer turned off (or just ignore the warning), and if you remove it later you may break them.
Several people believe we shouldn't test privates directly: they should be tested through the public interface.
An advantage of following this guidance is that your tests won't depend on your implementation. Said differently: if you want to change your private member without changing what you expose to the world, you won't have to touch your tests.
According to this school of thought, if your private member is important enough to justify a unit test, it might make sense to extract it into a new class.
Putting all this together, what you could do here is:
Create a kind of helper class with this hello method as public; you can then easily unit test it.
Let your current class use an instance of this helper class.
Test the public methods of your current class that rely on _hello: if this private has a bug, it should be caught by those higher-level tests.
I don't like either of the above answers. Dart's design around testing privates is very bad: Dart's private visibility is based on the library, and each .dart file is a library by default. A similar language is Rust, but Rust lets you write test code directly in the same file, so there is no private-visibility problem; Dart does not allow this.
Again, I don't think @visibleForTesting is a valid solution.
Because @visibleForTesting can only decorate public declarations, it serves as a mere analysis reminder that developers must not invoke these declarations from other files.
But syntactically, such a declaration can't use the _ prefix either, so the form (public) and the intent (private) become confusing, and it violates Dart's own naming conventions.
The argument that one should not test privates, or that they should be separated into other classes, reads like a justification and is completely unacceptable to me.
First, privates exist because they belong to a piece of business logic or a model in its context; it does not make logical sense to separate them into another class.
Second, if you must do this, it greatly increases the complexity of the code. For example, code moved to another class loses access to the context variables, so you have to pass references separately or create an instance of the class. True, you can then finally do some mocks, but you have also added a layer of abstraction.
It's hard to imagine that if you did this for the whole project, you'd probably double the layers in your entire code base.
For now, if you want your Dart package to get more than 90% coverage, you should not define any privates.
It sounds harsh, but that's the real story.
[Alternative] No one seems to have mentioned this yet: using part / part of to expose the privates. You can define a test-specific .dart file as the public interface to the library (file) under test and use it to expose all the private declarations that need testing; you could name these files xxx.fortest.dart.
But this is more of a psychological solution, since you are still essentially exposing all private variables/methods.
At least it's better than splitting the class, though.
Also, if Dart one day finally solves this problem, we can simply delete these .fortest.dart files.
A suggestion would be to NOT make methods/classes private, but to move code whose implementation details you want to hide into the lib/src folder.
This folder is considered private.
I found this approach on the fuchsia.dev page in this section under "Testing".
If you want to expose those private methods/classes located in the src folder to the public, you could export them inside your lib/main file.
I tried to import one of my libraries A (projects are libraries) into another library B and couldn't import code that was in the src folder of library A.
According to this StackOverflow answer it could still be possible to access the src folder from A in library B.
From the Dart documentation:
As you might expect, the library code lives under the lib directory and is public to other packages. You can create any hierarchy under lib, as needed. By convention, implementation code is placed under lib/src. Code under lib/src is considered private; other packages should never need to import src/.... To make APIs under lib/src public, you can export lib/src files from a file that’s directly under lib.

Why does one use dependency injection?

I'm trying to understand dependency injection (DI), and once again I have failed. It just seems silly. My code is never a mess; I hardly ever write virtual functions and interfaces (although I do once in a blue moon), and all my configuration is magically serialized into a class using Json.NET (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g., you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's a quite good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the declared type doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you move the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data telling it which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you can do this mapping inside your code, but then a type change means recompiling. But you could also put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time (!), that is, dynamically, without recompiling!
To give you a useful example for this: Think of a software that does not log normally, but when your customer calls and asks for help because he has a problem, all you send to him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
And now, when you rename things a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces dependencies in your code, but now all your code has a dependency to the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of having the various classes ask the service locator for an implementation of a specific interface, you - once again - invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances of a specific interface you need: a DI framework always injects exactly one instance per parameter, but you can call a service locator inside a loop, of course); hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where properties rather than constructor parameters are used for defining and resolving dependencies. Think of property injection as optional dependencies and of constructor injection as mandatory dependencies. But a discussion of this is beyond the scope of this question.
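For illustration only, a minimal sketch of the two styles side by side (the ICanCache property is a hypothetical optional dependency, not something from the question):

public class Database {
    private readonly ICanLog _logger;

    // Constructor injection: a mandatory dependency - the object cannot
    // even be constructed without it.
    public Database(ICanLog logger) {
        _logger = logger;
    }

    // Property injection: an optional dependency - it can be assigned
    // (or replaced) after construction, and the class must cope with
    // it being absent.
    public ICanCache Cache { get; set; }
}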
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
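For contrast, here is roughly what that wiring looks like once a container takes over Main's job (Simple Injector is used here only because it appears earlier on this page; any DI framework works along the same lines):

// The container builds the object graph: it sees that A's constructor
// needs a B, constructs the B, and passes it in.
var container = new Container();
container.Register<B>();
container.Register<A>();

A a = container.GetInstance<A>();
a.DoSomeStuff();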
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside and take control over their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The reason that you might "need more than one logger" is never actually met, as far as my experience goes, but the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . };

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now in the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution is to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other allows you to fake the time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . };
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you applied the Inversion of Control principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing, but there are other ways to achieve it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables; instead of an interface you could use a Func<DateTime> to achieve the same. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
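For what it's worth, the Func<DateTime> variant mentioned above would look roughly like this:

// Same technique, no interface: the clock is injected as a delegate.
public class OfferWeasel
{
    private readonly Func<DateTime> _now;

    public OfferWeasel(Func<DateTime> now)
    {
        _now = now;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _now();
        return offer;
    }
}

// In production: new OfferWeasel(() => DateTime.Now)
// In the test:   new OfferWeasel(() => new DateTime(2013, 1, 13, 13, 1, 0))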
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment processors around the world. When a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {

    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class, because it's not decoupled properly. You can imagine that for every new processor you support, you'll need to add a new if/switch case to every method, and this only gets more complicated. However, by using dependency injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you could achieve something very neat and maintainable.
class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException(type);
        }
    }
}

interface PaymentProcessor {
    void authorize();
}
** The code won't compile, I know :)
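To connect this back to injection itself, sketched in C# to match the rest of this page (the CheckoutService name is illustrative, not from the answer above): once the processors hide behind the interface, the consuming class simply takes one in its constructor and never sees the switch; only the factory (or a container) knows about the concrete types.

public class CheckoutService
{
    private readonly IPaymentProcessor _processor;

    public CheckoutService(IPaymentProcessor processor)
    {
        _processor = processor;
    }

    public void Checkout()
    {
        // PayPal, or any other processor - decided by whoever composed us.
        _processor.Authorize();
    }
}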
The main reason to use DI is that you want to put the responsibility for knowing the implementation where that knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question; that is up to the request handler.
That has already been common in OOP for a long time, often producing code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, so the front end knows which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the interface.
In between the DIY approach and DI is the pattern of a service locator, because the front end has to provide a key (present in the registry of the service locator) to let its request be resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation for each interface at a time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding; they should go together. Using D(b)I as a technical solution without organizational administration of service contracts is not very beneficial from my point of view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments in topics such as testing, versioning, and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time, not at a single point in time. :-)
Quite frankly, I believe people use these dependency injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like:
var logger = new Logger(); // sane, simple code

And the appropriate logger will be instantiated, because the JVM (or whatever other runtime or .so loader you have) would fetch it from the classpath configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class, at load time, and skip the mocking-framework madness.

How can you log from a Neo4j Server Plugin?

I'm trying to debug a problem in the Neo4J Server plugin I'm writing. Is there a log I can output to? It's not obvious where or how to do this.
Good question. I think you could use Java Logging? That should be routed into the normal logging system.
Just inject org.neo4j.logging.Log into the class containing the implementation of your Neo4j stored procedure.
public class YourProcedures {

    @Context
    public Transaction tx;

    @Context
    public Log log;

    @Procedure(value = "yourProcedure", mode = Mode.READ)
    public Stream<YourResult> yourProcedure(@Name("input") String input) {
        log.debug("something");
        return Stream.empty(); // placeholder result
    }
}
Logs are then dumped into the standard Neo4j log file.
The level is controlled by the GraphDatabaseSettings.store_internal_log_level configuration.
The level can also be changed at runtime: just inject the DependencyResolver bean and define this admin procedure. (The framework has a listener hooked to config changes which reconfigures the internal logging framework. This is the simplest solution I could find.)
@Context
public DependencyResolver dependencyResolver;

@Procedure(value = "setLogLevel", mode = Mode.DBMS)
@Description("Runtime change of logging level")
public void setLogLevel(@Name("level") String level) {
    Config config = dependencyResolver.resolveDependency(Config.class);
    config.set(GraphDatabaseSettings.store_internal_log_level, Level.valueOf(level));
}
UPDATE:
This ^ solution works; however, it is insufficient when one wants to use logging the way it is usual in Log4j: different loggers organized in a hierarchy, each logger at its own level. The org.neo4j.logging.Log component is just a wrapper of the Log4j logger for the GlobalProcedures class. This logger is only one of many loggers in the hierarchy; in fact, the wrapper blocks access to the richer features of the underlying framework. (Unfortunately, defining multiple @Context Log fields in the YourProcedures class, distinguished by some logger-qualifying annotation, is also impossible, because field injection is driven by a Map<Class, instance>, so there is only one possible instance to inject for any @Context-annotated field of a given type.)
Solution 1:
Use JUL as in the accepted answer. The disadvantage is that JUL redirects log events to the underlying Log4j anyway, so if the logger hierarchy is defined in JUL, Log4j must be set to the lowest possible level in order to make the JUL levels effective.
Solution 2:
Use Log4j directly (i.e., public static final Logger logger = LogManager.getLogger("some.identifier.in.hierarchy") in YourProcedures). There are some issues with redefining the configuration programmatically, though it is possible. I dropped this solution only because I had some trouble deploying it in a non-Docker environment.
Solution 3: (finally chosen)
I defined a custom component, LogWithHierarchy (it can be built from an own ExtensionFactory loaded using ServiceLoaders - I was inspired by the APOC config implementation). This component provides an API of the form debug(loggerName, message), info(loggerName, message), etc. The component knows the original Log, drills down into its Log4j LoggerContext, and redirects all logging requests to the particular logger in that LoggerContext. Log messages finally end up in debug.log. With this solution, the original Log4j logger hierarchy is fully utilized, levels can be changed dynamically at runtime (setLogLevel must be changed to operate on the aforementioned LoggerContext), and everything is still implemented using standard Neo4j plugin support.

Inclusion Handling in MVC 2 / MVCContrib

I'd like to improve my page by combining and minifying JavaScript and CSS files. Since MVCContrib already contains a project called IncludeHandling, I took a look at it, which unfortunately left me with unanswered questions:
There is quite a set of interfaces and objects involved in the process. Now I'm using Ninject.Mvc, but it seems that MvcContrib.IncludeHandling is using some additional (home-brewed?) DI? Can I work around this? Has anybody used this and can share some experiences?
Secondly, advice that is often heard is to put static content on different domains so the request does not contain cookies and the like, making it much easier for the server to handle the request. But how can I combine this with automatic inclusion handling - isn't that necessarily served by the same application?
EDIT: Figured out that there is really just a single resolve call in the whole thing; I really wonder why they use DI for that... Thinking about a fork there...
Well, MvcContrib.IncludeHandling uses MvcContrib's DependencyResolver to find the necessary components. It's not very well documented (see the sample site for more detail, although in that case it uses a custom injector).
For example, MvcContrib.Castle has a WindsorDependencyResolver for that IoC container, which you can mimic to use Ninject (there may be something if you Google around).
The initialization is quite verbose, but goes like this (container is the Windsor container; in your case, use Ninject):
var httpContextProvider = new HttpContextProvider(HttpContext.Current);
var settings = (IIncludeHandlingSettings)ConfigurationManager.GetSection("includeHandling");

container.Register(Component.For(typeof(IIncludeReader)).ImplementedBy(typeof(FileSystemIncludeReader)));
container.Register(Component.For(typeof(IIncludeStorage)).ImplementedBy(typeof(StaticIncludeStorage)));
container.Register(Component.For(typeof(IKeyGenerator)).ImplementedBy(typeof(KeyGenerator)));
container.Register(Component.For(typeof(IIncludeHandlingSettings)).Instance(settings));
container.Register(Component.For(typeof(IHttpContextProvider)).Instance(httpContextProvider));
container.Register(Component.For(typeof(IIncludeCombiner)).ImplementedBy(typeof(IncludeCombiner)));
container.Register(Component.For(typeof(IncludeController)).ImplementedBy(typeof(IncludeController)).LifeStyle.Transient);

DependencyResolver.InitializeWith(new WindsorDependencyResolver(container));
This way you can register all the dependencies that are needed. Be aware that you need the includeHandling section in your web.config:
<configSections>
  <section name="includeHandling" type="MvcContrib.IncludeHandling.Configuration.IncludeHandlingSectionHandler, MvcContrib.IncludeHandling"/>
</configSections>

<includeHandling>
</includeHandling>
I hope this helped.
Check out the ASP.NET Ajax Minifier: http://www.asp.net/ajaxlibrary/ajaxminquickstart.ashx
It ships with an MSBuild task that you can set up so that, on build, it finds and minifies CSS and JS files in your project.
Here is a Unity version of the DependencyResolver setup. I did it as a Unity container extension.
public class ConfigureMvcContrib : UnityContainerExtension
{
    protected override void Initialize()
    {
        var settings = (IIncludeHandlingSettings)ConfigurationManager.GetSection("includeHandling");

        Container
            .RegisterFactory<IHttpContextProvider>(c => new HttpContextProvider(HttpContext.Current))
            .RegisterFactory<IIncludeReader>(c => new FileSystemIncludeReader(c.Resolve<IHttpContextProvider>()))
            .RegisterType<IIncludeStorage, StaticIncludeStorage>()
            .RegisterType<IKeyGenerator, KeyGenerator>()
            .RegisterType<IIncludeCombiner, IncludeCombiner>()
            .RegisterInstance<IIncludeHandlingSettings>(settings);

        DependencyResolver.InitializeWith(new UnityDependencyResolver(Container));
    }
}
It is worth noting that the IncludeHandling setup, as-is, is not ideal for a web-cluster setup because of the way it does caching. I had to roll my own controller action that takes a list of files to combine and minify. I can provide more info if anyone is interested.
