I'm doing research on software architecture and layering, and I've looked at lots of open-source .NET projects, such as Orchard CMS.
I think Orchard is a good example of some design patterns.
As far as I know, the UI, services, repositories and entities should live in separate assemblies, to prevent misuse. But in Orchard (because it is modular and pluggable), I see service, repository and entity classes and interfaces in the same folder and the same namespace.
Isn't that an anti-pattern, or is it an acceptable design?
TL;DR: assemblies are not necessarily the right separation device.
No, what's important is that they are separated, not that they are in separate assemblies. Furthermore, the way you would factor things in most applications has to be different from what you do in an extensible CMS. The right separation in an extensible CMS is into decoupled features that can be added and removed at will, whereas regular tiered applications require decoupling of layers so those can be worked on and refactored with minimal risk and impact. The right comparison is actually between one of those applications and a module or feature in Orchard, not with Orchard as a whole. But of course, good practices should be used within modules, and they usually are.
Now separation into assemblies is a separate concern, one that is more technical than architectural. You can see an assembly as a container of self-contained code, created for the purposes of code reuse and dynamic linking, but not especially as a way to separate layers. This is why, in Orchard, assemblies coincide with the unit of code reuse: the module.
Also consider the practical aspect of this: good architectural practices have one main goal, which is to make applications easier and cheaper to maintain (and not, surprising as that may sound, to make consultants rich by enabling them to set up astronaut architectures that only they can understand). A secondary goal is to codify what makes applications scalable and well-performing (although that is a trickier goal, as it can easily lead to premature optimization, the root of most software evil).
For that first goal, conceptual separation is the most important, but the way this separation is made is usually not very important.
The secondary goal unfortunately conflicts with the idea of using assemblies as a separation device: Orchard as it stands already has dozens of assemblies before you even start to add optional modules. And assemblies do not come for free: they need to be dynamically compiled, loaded and JIT-compiled, and they carry memory overhead. In other words, for good performance you'll usually want to reduce the number of assemblies.
If you wanted to separate an Orchard site into assemblies for modules as it is today, and then separate each of those modules into layered assemblies, you would have to multiply the number of modules by the number of layers. That would be hundreds of assemblies to load. Not good. As a matter of fact, we are even considering an option for dynamic compilation that builds all modules into a single assembly.
I am confused about this line
Aspect-Oriented Programming and Dependency Injection are very different concepts, but there are limited cases where they fit well together.
from this website
http://www.postsharp.net/blog/post/Aspect-Oriented-Programming-vs-Dependency-Injection
I understand the advantages of DI over AOP, but why aren't they used together more often? Why are there only limited cases where they fit together? Is it the way AOP code is compiled that makes using both difficult?
How do you define "limited cases"? I myself always use AOP and DI together.
There are basically three ways to apply AOP, which are:
Using code weaving tools such as PostSharp.
Using dynamic interception tools such as Castle Dynamic Proxy.
Using decorators.
The use of DI with code-weaving tools doesn't mix very well, and I think that's the reason the PostSharp site states that "there are limited cases where they fit well together". One reason they don't mix is that Dependency Injection is about loose coupling, while code weaving hard-couples your code and the aspects together at compile time. From the perspective of DI, code weaving becomes an anti-pattern. In section 11.2 of our book, Mark and I make this argument in detail. In summary, we state:
The aim of DI is to manage Volatile Dependencies by introducing Seams into your application. This enables you to centralize the composition of your object graphs inside the Composition Root.
This is the complete opposite of what you achieve when applying compile-time weaving: it causes Volatile Dependencies to be coupled to your code at compile time. This makes it impossible to use proper DI techniques and to safely compose complete object graphs in the application's Composition Root. It's for this reason that we say that compile-time weaving is the opposite of DI: using compile-time weaving on Volatile Dependencies is an anti-pattern. [page 355]
Dynamic interception, on the other hand, which means applying cross-cutting concerns at runtime by generating decorators on the fly, works great with DI. It integrates easily with most DI libraries out there, and it can be done as well when using Pure DI, which is something we demonstrate in section 11.1.
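As a minimal sketch of runtime interception, using Castle DynamicProxy (from the Castle.Core package); the calculator and interceptor here are illustrative names, not from the book:

```csharp
using System;
using Castle.DynamicProxy; // Castle.Core NuGet package

public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

// A cross-cutting concern (timing) applied at runtime, not at compile time.
public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var start = DateTime.UtcNow;
        invocation.Proceed(); // call through to the real Calculator
        Console.WriteLine(invocation.Method.Name + " took " + (DateTime.UtcNow - start));
    }
}
```

The proxy is composed where you wire up the rest of the object graph, e.g. `new ProxyGenerator().CreateInterfaceProxyWithTarget<ICalculator>(new Calculator(), new TimingInterceptor())`, so the business code stays unaware of the aspect.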
My personal preference is to use decorators. My systems are designed around a few well-defined generic abstractions, and this allows me to apply cross-cutting concerns at almost all the places that matter to my system. That leaves, in rare cases, a few spots where decorators don't work very well, but this is almost always caused by design flaws: either my own limitations as a developer, or flaws in the .NET Framework or some other tool. One famous design flaw is the INotifyPropertyChanged interface. You might have guessed it, but in our book we describe this approach in a lot of detail; we spend a complete chapter (chapter 10) on the topic.
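A sketch of what the decorator approach can look like; the `ICommandHandler<T>` abstraction and the class names here are illustrative, not taken from the book:

```csharp
using System;
using System.Collections.Generic;

// A generic abstraction the system is designed around.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class ShipOrderCommand
{
    public int OrderId;
}

public class ShipOrderHandler : ICommandHandler<ShipOrderCommand>
{
    public void Handle(ShipOrderCommand command)
    {
        // ... business logic only; no cross-cutting code here ...
    }
}

// The cross-cutting concern (auditing) lives in a decorator that wraps
// any handler, so it can be applied uniformly across the system.
public class AuditingHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly List<string> auditLog;

    public AuditingHandlerDecorator(ICommandHandler<TCommand> decoratee, List<string> auditLog)
    {
        this.decoratee = decoratee;
        this.auditLog = auditLog;
    }

    public void Handle(TCommand command)
    {
        this.auditLog.Add("Handling " + typeof(TCommand).Name);
        this.decoratee.Handle(command);
    }
}
```

The wrapping happens once, in the Composition Root: `ICommandHandler<ShipOrderCommand> handler = new AuditingHandlerDecorator<ShipOrderCommand>(new ShipOrderHandler(), auditLog);`.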
We are building a computational engine in which a number of objects interact to perform the computation. The objects have dependencies among themselves and mimic a subset of a real-world system. We are building the engine in a phased manner, incrementally modelling parts of the system, which could result in changes to the dependency graph as we progress. We could state the dependencies between the objects explicitly in code, but that portion of code might then have to change in the future. Would using IoC alleviate this problem, or would it be overkill?
There are several ways that applying dependency injection can be useful:
It allows you to abstract code that needs to be tested in isolation. In your case you might want to split the computational engine in multiple parts to make it easier to test the smaller parts of that engine, or abstract the database engine that the engine is using internally.
It allows that engine to be developed by multiple teams. By depending on the abstraction the other team provides (or that you specified for them), you can make progress without being blocked by the other team's progress.
If the engine consists of smaller parts that must be interchangeable (as with the specification pattern), injecting abstractions for those parts can help achieve this. You could even swap implementations at runtime, since you depend only on an abstraction.
If, however, this computational engine is developed by one team, has no dependencies on anything that needs to be abstracted (database, file system, etc.), and isn't so complex that testing the separate parts would make development and verification easier, then using dependency injection in it might not help.
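To make the third point above concrete, here is a minimal sketch of injecting one interchangeable part into an engine; the `IDecayModel` abstraction and the formulas are hypothetical stand-ins for whatever part of your model is likely to change:

```csharp
using System;

// An abstraction for one volatile part of the computation.
public interface IDecayModel
{
    double Decay(double value, double time);
}

public class ExponentialDecay : IDecayModel
{
    public double Decay(double value, double time)
    {
        return value * Math.Exp(-time);
    }
}

public class LinearDecay : IDecayModel
{
    public double Decay(double value, double time)
    {
        return Math.Max(0, value - time);
    }
}

// The engine depends only on the abstraction, so swapping the model
// later (even at runtime) does not require changing the engine.
public class ComputationEngine
{
    private readonly IDecayModel model;

    public ComputationEngine(IDecayModel model)
    {
        this.model = model;
    }

    public double Run(double initial, double time)
    {
        return this.model.Decay(initial, time);
    }
}
```

When the dependency graph changes, only the place where `new ComputationEngine(new LinearDecay())` is composed needs to change, not the engine itself.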
To make a project manageable, we break it into sub-projects (class libraries in C#, for instance). Now, in ASP.NET MVC 2, we also have Areas. I'd like to know whether Areas serve, or can serve, the same purposes as class libraries. It looks like both are meant to make a project manageable...
Personally, I'm about to write something bigger, and I don't know which way to go: Areas, class libraries, or both?
Thanks for helping
Making a project manageable is really not the point of areas nor class libraries, though they do have that effect when used well.
Generally, the purpose of a class library is more about creating a stand-alone library of code that all serves some inter-related purpose. The point is really that a well used class library represents a collection of code that is maintained, developed, and distributed as a single unit. The big key is the distribution though, since class libraries can be distributed and used in many applications. It is usually a waste of time to split code out into class libraries if those libraries are never distributed, maintained, or developed independently. If they exist just to organize and group code that is otherwise dependent on other code in other libraries then you may be making your code less manageable in the long run; namespaces and folders alone can serve the purpose of keeping code grouped, organized, and manageable.
Areas in MVC are a tad different. Their purpose is to partition large web applications into semi-independent segments that are all hosted in a single project (and thus are part of the same class library; an MVC app is just a fancy kind of class library). So the entire purpose of areas tends to be separation of responsibilities. The biggest advantage of areas is that they are useful for splitting large applications into sections that are maintained and developed by separate teams of developers, or into sections that have widely different infrastructure requirements from the rest of the application.
So in terms of manageability alone, areas are a good idea if your MVC app is large and has distinct functional sections. Class libraries can only be justified if there are other benefits aside from code manageability.
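For concreteness, in ASP.NET MVC 2 an area is wired up with an `AreaRegistration` subclass, which the framework discovers when `Application_Start` calls `AreaRegistration.RegisterAllAreas()`. The "Admin" area name and route below are illustrative:

```csharp
using System.Web.Mvc;

// Registers an "Admin" area and its route. Each area in the project
// provides one of these classes; MVC finds them by reflection.
public class AdminAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Admin"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "Admin_default",
            "Admin/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional });
    }
}
```

Note that this is all still one project and one assembly, which is exactly the contrast with class libraries drawn above.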
At the most basic level, you're comparing a unit of C# compilation with a specific framework feature.
Areas are simply built-in routing and view-location customizations that let you separate your app into different folders. You could instead give your MVC application a VirtualPathProvider and use views embedded in class libraries to segment your application, but that isn't the standard way of organizing things.
I've been looking into MEF and Portable Areas, and the pros and cons of using them in a collaborative programming environment.
I've found the following article
http://www.thegecko.org/index.php/2010/06/pluggable-mvc-2-0-using-mef-and-strongly-typed-views/
which states:
"MEF was chosen over other solutions such as Portable Areas as it allows all plugins to be composed together as a single site at runtime without the assemblies needing to reference each other."
I haven't found any real in depth comparisons between the two technologies, although I did find the following question which didn't have any answers.
http://mef.codeplex.com/Thread/View.aspx?ThreadId=210370
I've spent a while searching; does anyone have experience with these two technologies and/or insight into situations where one would be preferable over the other?
I've never used Portable Areas, but I do think it is worth mentioning that one big benefit of MEF over other solutions, in my mind, is simply that MEF is now included in .NET Framework 4.0. If it is true that Portable Areas require assemblies to reference each other, that would seem to be a big negative compared to MEF (that is, if a pluggable architecture is important to you).
What problems does MEF (Managed Extensibility Framework) solve that cannot be solved by existing IoC/DI containers?
The principal purpose of MEF is extensibility: to serve as a 'plug-in' framework for when the author of the application and the author of the plug-in (extension) are different and have no particular knowledge of each other beyond a published interface (contract) library.
Another problem space MEF addresses that's different from the usual IoC suspects, and one of MEF's strengths, is [extension] discovery. It has a lot of, well, extensible discovery mechanisms that operate on metadata you can associate with extensions. From the MEF CodePlex site:
"MEF allows tagging extensions with additonal metadata which facilitates rich querying and filtering"
Combined with an ability to delay-load tagged extensions, being able to interrogate extension metadata prior to loading opens the door to a slew of interesting scenarios and substantially enables capabilities such as [plug-in] versioning.
MEF also has 'Contract Adapters', which allow extensions to be 'adapted' or 'transformed' (from one type to another) with complete control over the details of those transforms. Contract Adapters open up another creative front relative to just what 'discovery' means and entails.
Again, MEF's 'intent' is tightly focused on anonymous plug-in extensibility, something that very much differentiates it from other IoC containers. So while MEF can be used for composition, that's merely a small intersection of its capabilities relative to other IoC containers, with which I suspect we'll be seeing a lot of incestuous interplay going forward.
IoC containers focus on the things you know, e.g. "I know I will use one logger in a unit test and a different logger in my app." MEF focuses on the things you don't know: there may be 1 to n loggers appearing in my system.
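A minimal sketch of that "1 to n loggers" scenario with MEF's attributed model (`System.ComponentModel.Composition` in Framework 4.0); the `ILogger` contract and the metadata key are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface ILogger
{
    void Log(string message);
}

// An extension author exports a logger, tagged with metadata the host
// can inspect before deciding whether to instantiate it.
[Export(typeof(ILogger))]
[ExportMetadata("Name", "Console")]
public class ConsoleLogger : ILogger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }
}

public class Host
{
    // Zero or more loggers may appear at runtime. Lazy<T, TMetadata>
    // lets the host query metadata without loading the extension.
    [ImportMany]
    public IEnumerable<Lazy<ILogger, IDictionary<string, object>>> Loggers { get; set; }
}
```

The host never references the extension assemblies directly; it just points a catalog at them: `var container = new CompositionContainer(new AssemblyCatalog(typeof(Host).Assembly)); container.ComposeParts(host);`.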
Scott Hanselman and I covered this topic in more detail in a recent episode of Hanselminutes:
http://www.hanselminutes.com/default.aspx?showID=166