Reasons to split project into multiple projects? - project-organization

What are common reasons to split a development project (e.g. ASP.NET MVC application) into multiple projects? Code organization can be done via folders just as well. Multiple projects tend to generate circular reference conflicts and increase complexity by having to manage/resolve those.
So, why?

Some reasons are:
Encapsulation - By packaging a set of routines into a separate library, either as a static library or a set of DLLs, it becomes a black box. For it to be a good black box, all you need to do is make sure you give the right inputs and get the right outputs. It helps when you re-use that library. It also enforces certain rules and prevents programming by hacks ('hmm...I'll just make that member function public for now') - see the sketch after this list.
Reduces compile time - the library is already compiled; you don't have to rebuild it at compile time, just link to it (assuming you are doing C++).
Decoupling - By encasing your classes in standalone libraries, you reduce coupling and can reuse the library for other purposes. Likewise, as long as the interface of the library does not change, you can make changes to the library all you like, and others who link to it or refer to it do not need to change their code at all. DLLs are useful in this respect in that no re-compilation is required, but they can be tricky to work with if many applications install different versions of the same DLLs. You can update libraries without impacting the client's code. While you can do the same with just folders, there is no explicit mechanism to force this behaviour.
Also, by practicing the discipline of having different libraries, you can make sure what you have written is generic and decoupled from implementation.
Licensing/Commercialization - Well, I think this is quite obvious.
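As a minimal C# sketch of the encapsulation point (the types are invented for illustration), an assembly boundary lets the compiler enforce the black box in a way a folder cannot:

// Inside the library project:
public class PriceCalculator
{
    public decimal Calculate(decimal basePrice) => ApplyTax(basePrice);

    // internal: visible within this assembly only. A consumer in another
    // project can't "just make it public for now" without editing the
    // library itself; a folder boundary would not stop them.
    internal decimal ApplyTax(decimal basePrice) => basePrice * 1.2m;
}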

One possibility is to have a system that a given group (or single developer) can work on independently of the rest of the code. Another is to factor out common utility code that the rest of the system needs -- things like error handling, logging, and common utilities come to mind.
Of course, just as when deciding what goes in a particular function / class / file, where the boundaries lie is a matter of art, not science.

One example I can think of is that you might find in developing one project that you end up developing a library which may be of more general use and which deserves to be its own project. For instance maybe you're working on a video game, and you end up writing an audio library that's in no way tied specifically to the game project.

Code reuse. Let's say you have project A and you start a new project B which has many of the same functions as project A. It makes sense to pull out the shared parts of A and make them into a library which can be used by both A and B. This allows you to have the code in both without having to maintain the same code in two places.
Code reuse, inverted. Let's say you have a project which works on one platform. Now you want it to work on two platforms. If you can separate out the platform-dependent code, you can start different projects for each platform-dependent library and then compile your central codebase with different libraries for different platforms.
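A minimal sketch of that platform split in C# (the names are hypothetical): the central codebase depends only on an interface, and each platform project supplies its own implementation.

// Core project: platform-neutral, references no platform-specific APIs.
public interface IFileStore
{
    void Save(string name, byte[] data);
}

public class ReportExporter
{
    private readonly IFileStore _store;
    public ReportExporter(IFileStore store) { _store = store; }

    // The central logic never changes per platform.
    public void Export(byte[] report) => _store.Save("report.bin", report);
}

// Platform-specific project, compiled only for desktop builds:
public class DesktopFileStore : IFileStore
{
    public void Save(string name, byte[] data) =>
        System.IO.File.WriteAllBytes(name, data);
}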

Some tips about splitting your project into multiple projects:
One reason for separating a project into multiple class libraries is re-usability. I've yet to see the BLL or DAL part of an application re-used in another application. This is what textbooks from the 90s used to tell us! But most, if not all, modern applications are too specific; even in the same enterprise, I've never seen the same BLL or DAL parts re-used across multiple applications. Most of the time, what you have in those class libraries is purely there to serve what the user sees in that particular application, and it's not something that can be easily re-used (if at all).
Another reason for separating a project into multiple class libraries is about deployability. If you want to independently version and deploy these pieces, it does make sense to go down this path. But this is often a use case for frameworks, not enterprise applications. Entity Framework is a good example. It’s composed of multiple assemblies each focusing on different areas of functionality. We have one core assembly which includes the main artifacts, we have another assembly for talking to a SQL Server database, another one for SQLite and so on. With this modular architecture, we can reference and download only the parts that we need.
Imagine if Entity Framework was only one assembly! It would be one gigantic assembly with lots of code that we won’t need. Also, every time the support team added a new feature or fixed a bug, the entire monolithic assembly would have to be compiled and deployed. This would make this assembly very fragile. If we’re using Entity Framework on top of SQL Server, why should an upgrade because of a bug fix for SQLite impact our application? It shouldn’t! That’s why it’s designed in a modular way.
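To illustrate with EF Core's provider packages (which follow the same modular design; the context class and connection string here are made up), an application references only the provider assembly it actually uses:

// Sketch: only Microsoft.EntityFrameworkCore.SqlServer is referenced here.
// A bug fix shipped in the SQLite provider assembly never touches this app.
using Microsoft.EntityFrameworkCore;

public class ShopContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // UseSqlServer lives in the SQL Server provider assembly;
        // UseSqlite (not referenced here) lives in a separate one.
        options.UseSqlServer("Server=.;Database=Shop;Trusted_Connection=True;");
    }
}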
In most web applications out there, we version and deploy all these assemblies (Web, BLL and DAL) together. So, separating a project into three projects does not add any value.
Layers are conceptual. They don't have a physical representation in code. Having a folder or an assembly called BLL or DAL doesn't mean you have properly layered your application, neither does it mean you have improved maintainability. Maintainability is about clean code, small methods, small classes each having a single responsibility and limited coupling between these classes. Splitting a project with fat classes and fat methods into BLL/DAL projects doesn't improve the maintainability of your software. Assemblies are units of versioning and deployment. Split a project into multiple projects if you want to re-use certain parts of it in other projects, or if you want to independently version and deploy each project.
Source: https://programmingwithmosh.com/csharp/should-you-split-your-asp-net-mvc-project-into-multiple-projects/

Ownership, for one thing. If you have developers responsible for different parts of the code base, then splitting the project up is the natural thing to do. One would also split projects by functionality. This reduces conflicts and complexity. If complexity increases instead, that just means there is a lack of communication and you are doing it wrong.

Instead of questioning the value of code in multiple assemblies, question the value of clumping all of your code in one place.
Would you put everything in your kitchen in a single cabinet?
Circular references are circular references, whether they happen between assemblies or within them. The design of the offending components is most likely sub-optimal; eschewing organization via assemblies ironically prevents the compiler from detecting the situation for you.
I don't understand the statement that you can organize code just as well with folders as with projects. If that were true, our operating systems wouldn't have the concept of separate drives; they would just have one giant folder structure. Higher-order organizational patterns express a different kind of intent than simple folders.
Projects say "These concepts are closely related, and only peripherally related to other concepts."

There are some good answers here so I'll try not to repeat.
One benefit of splitting code out to its own project is to reuse the assembly across multiple applications.
I liked the functional approach mentioned as well (e.g. Inventory, Shipping, etc. could all get their own projects). Another idea is to consider the deployment model. Code shared between layers, tiers, or servers should probably be in its own common project (or set of projects if finer control is desired). Code earmarked for a certain tier may be in its own project, e.g. if you had a separate web server and application server, then you wouldn't want to deploy the UI code on the application server.
Another reason to split may be to allow small incremental deploys once the application is in production. Let's say you get an emergency production bug that needs to be fixed. If the small change requires a rebuild of the entire (one project) application you might have a hard time justifying a small test cycle to QA. You might have an easier sell if you were deploying only one assembly with a smaller set of functionality.

Related

How to implement Dependency Injection with MVC5 and MEF2 (Convention-Based) in an n-tier application?

I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling, and testability. I'm happy to take advice on improvements if any of this is unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose-coupling and DI. I have also seen others that require the Data Layer dll being put in the presentation layer bin folder and configuring other classes to look there - again hampering the idea of loose-coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this, actually, and would rather have the object injected into the model, but there may be reasons for this, as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly, let's call it Commons.dll)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
Does this make sense to you? Please ask, if there is something unclear.
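To make the dependency directions concrete, here is a minimal sketch; the Customer and repository types are invented for illustration, and in practice each namespace below would live in its own project:

// DataAccess.Contracts.dll - holds only interfaces and shared types.
namespace DataAccess.Contracts
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }
}

// DataAccess.dll - references only DataAccess.Contracts.dll.
namespace DataAccess
{
    using DataAccess.Contracts;

    public class SqlCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            // Database access goes here; callers only ever see the interface.
            return new Customer { Id = id, Name = "placeholder" };
        }
    }
}

Presentation and Services would reference only the Contracts assembly, so swapping SqlCustomerRepository for another implementation touches nothing else.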
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration, and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second: I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
Of course, the container in a sense bundles all the dependencies which you have kept out of your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling; it is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, when working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say, you've chosen convention based config of your DI container. What you can do now is to open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy and paste it to your original application's program folder, replacing the old DataAccess.dll. Done, you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. It really is a tradeoff, using IoC and DI. I highly recommend being pragmatic in your design decisions: don't interface everything, it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC container. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and just works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS-supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,
    WithName.Default);

Is it good choice to structure a large Asp.net MVC application using Areas?

Each Area will have its own config, etc., so as the number of Areas increases, the complexity and maintenance burden increase as well. Is it a good choice to modularise or partition an MVC application's functionality into Areas, or to continue with the traditional Controller/View approach?
Please suggest a common solution or a better way to architect a large-scale MVC application.
Areas shouldn't be confusing, and certainly aren't redundant. As you say, they allow you to partition your web app into smaller functional groupings. This is extremely helpful when the size of your applications grow and a single application umbrella becomes unwieldy.
As an example, I have just completed a large application that stored promotional data for various retailers across North America. The US and Canada sales teams are isolated, but are executing their tasks in nearly the same business contexts.
It made a lot of sense to partition the US and Canada parts of the web app into Areas, which organized things a lot better for us. Each area could still use the same components where they make sense (repositories, services, etc...), but the isolation Areas brought allowed us to build separate controllers and views specific to each business group, instead of trying to run a bunch of logic checks to accommodate whatever region the user was in.
Here is a possible alternative to your approach, from "Programming ASP.NET MVC 4" by Jess Chadwick, Todd Snyder, and Hrusikesh Panda:
There are many different approaches to take when designing an ASP.NET MVC application. A developer can decide to keep all the application's components inside the website assembly, or separate components into different assemblies. In most cases it's a good idea to separate the business and data access layers into different assemblies than the website. This is typically done to better isolate the business model from the UI and make it easier to write automated tests that focus on the core application logic. In addition, using this approach makes it possible to reuse the business and data access layers from other applications (console applications, websites, web services, etc.).
A common root namespace (e.g., company.{ApplicationName}) should be consistently used across all assemblies. Each assembly should have a unique sub-namespace that matches the name of the assembly. Figure 5-4 shows the project structure of the Ebuy reference application. The functionality for the application has been divided into three projects: Ebuy.WebSite contains the views, controllers, and other web-related files; Ebuy.Core contains the business model for the application; and the CustomExtentions project contains the custom extensions used by the application for model binding, routing, and controllers. In addition, the project has two testing projects (not shown): UnitTests and IntegrationTests.
There is no rule on whether to use Areas or not; it's basically up to the solution architect to estimate whether using Areas would provide a benefit or not.
Our latest project that involved Areas included 3 different types of users working on the website. We used a controller naming scheme where the controller name matched the resource name (i.e. CategoryController).
However, certain resources could be accessed by all 3 user groups in completely different ways: one user group could only list resources, another could manage them (basic CRUD), while the 3rd (admin-like) group could use advanced features such as exporting, importing, etc.
By separating the functionalities into Areas, we reduced security problems by simply decorating the controllers in an Area to require the user type specific to that Area, so permissions could not get mixed up. Doing it on the base controller for the Area made things even more centralized (see the sketch below).
That is one reason why we would pick separation of areas.
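A rough sketch of that pattern (the controller names and role are invented): one base controller per Area carries the authorization attribute, so every controller inside the Area inherits the restriction.

using System.Web.Mvc;

// Base controller for the admin Area; the role name is illustrative.
[Authorize(Roles = "Administrators")]
public abstract class AdminAreaController : Controller
{
}

// Inherits the Authorize check automatically.
public class CategoryController : AdminAreaController
{
    public ActionResult Export()
    {
        // admin-only functionality
        return View();
    }
}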
On the other hand, we've often been in the situation of having a high-demand public website and a "back-office" internal configuration website. For performance, scalability, and concurrency reasons, we've quite often designed the public website, which could be load-balanced, as one project, and the back-office website, which would be hosted only once, as a second project - instead of using Areas.
Again, this is just one approach from the industry, not necessarily the optimal approach.

MVC - base solution Vs Branched source control

I'm planning an MVC application of which there will be two variants; one for the US and one for Europe. I can't foresee a 3rd (or nth) deployment ever happening.
The two applications will both share almost identical functionality but with some (reasonably) small variations in the Model, View and Controller.
We'll be using Entity Framework with a database-first approach.
The two options I see are:
Use a base MVC solution, alongside a solution for the specifics of each deployment - extended base Models, Controller event handlers, some carefully considered partial views and bundled CSS & JS.
Use a single solution for the whole project but use two version control (SVN) branches for the separate deployments
Which of these is the 'proper' approach for this type of project? Or is there a third option?
UPDATE: One alternative solution which has been pointed out to me, would be to actually make this one single application hosted on Azure/AWS, and with some conditional logic depending on whether the request is made from the US or EU host-header.
Option 2 will make it harder to apply enhancements: you must apply each one in two places, and it gets worse whenever the branches are incompatible and some adjustment / modification is needed. It is only useful if the differences between the environments are big.
Option 1 is better. Please note that you will need a good plan for designing the partial CSS / JavaScript code. However, with this design you will still face code duplication (which also happens in option 2). Consider this code:
public void DoSomething(){
    // retrieve data
    // specific code for EU / NA
    // save data
}
This can lead to duplication in the retrieve-data and save-data steps.
There are some tricks to handle this, but I think the cleanest way is to use Dependency Injection. With DI and a decent DI container (I'm inexperienced in configuring DI containers, so I cannot suggest which one is good), you get these benefits (see the sketch after this list):
Can handle duplicated code such as the example above
Can define some profiles for easier configuration and wiring, making maintenance easier
Testable
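A minimal sketch of what that might look like (the type names are invented): the shared retrieve/save steps are written once, and only the region-specific step is injected per deployment.

public interface IRegionRules
{
    void Apply(Order order);
}

public class EuRegionRules : IRegionRules
{
    public void Apply(Order order) { /* EU-specific logic */ }
}

public class NaRegionRules : IRegionRules
{
    public void Apply(Order order) { /* NA-specific logic */ }
}

public class OrderProcessor
{
    private readonly IRegionRules _rules;
    public OrderProcessor(IRegionRules rules) { _rules = rules; }

    public void DoSomething(Order order)
    {
        // retrieve data (shared, written once)
        _rules.Apply(order); // specific code for EU / NA
        // save data (shared, written once)
    }
}

public class Order { /* ... */ }

The DI container then binds IRegionRules to EuRegionRules or NaRegionRules depending on the deployment, and the shared steps never fork.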

Code re-use between Grails projects - keeping it DRY

The Grails framework has a lot of constructs/features that allow for adhering to the DRY principle ("don't repeat yourself") within a project. That is: within a specific project you're seldom required to repeat identical blocks of settings or code. So far so good.
However, the more I've worked with Grails, the more I've observed that I repeat code not within the same project but between projects. That is, project A has controllers, GSPs, and images that overlap with project B's. This is a maintenance nightmare, since bug fixes in project A must also be applied to project B, etc.
I'd like to take DRY to the next level by not duplicating code between my projects.
My question: How do you tackle this problem (violated "inter-projects DRY") in your own internal Grails projects?
Please be very specific/concrete. If possible try to include specific code examples on how you solve it in practice.
Writing a custom plugin is the best way. You don't need to release it to the public repository, as you can use a private repository somewhere within your own network.
I haven't had enough duplicated code yet to pull out a plugin (most of the code repeated in my projects seem to be covered by the various public plugins), but a plugin can be as simple as a few common domain classes or services.
I agree with Lee: using common/shared plugins is probably the best way to go. At one place I worked, we had quite a few internal plugins for this very reason.
The most common pattern is to put your common domain objects into their own plugin. This works really well for domain classes or services. We didn't end up refactoring the controllers, views, and static resources into a plugin, but the same principle should apply.
Long story short: Reuse of Grails artifacts = use a plugin.
To add to Lee and Colin's points, which are both valid, I think thinking in terms of multiple plugins can yield other benefits.
For example, you can split up your application functionality into multiple pieces and have different people work on them. Or it can yield results during deployment if, say, you need to have two layers of access to an app - user-level and admin: if your domain model is in a separate plugin, as Colin suggested, you can easily build two applications and deploy them separately.
For my app, I have several plugins specific to my project - domain classes plugin, one that is a bunch of code for importing data (which I can run easily against my site), some other plugins for graphing and customization of scaffolding. It takes a bit more thinking, but I expect this factoring will yield dividends in the future as we bring on more people to the team.

When do you use dependency injection?

I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it doesn't really provide one.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses where I've found an Inversion of Control pattern valuable (see the sketch after this list):
A plugin architecture. By exposing the right entry points you can define the contract for the service that must be provided.
Workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
Per-client application. Let's say you have various clients who pay for a set of "features" of your project. By using dependency injection you can easily provide just the core components plus the "added" components that provide only the features the client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
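For the plugin case, a rough C# sketch (the contract and loader are hypothetical) of how a host can discover implementations at runtime without any compile-time reference to them:

using System;
using System.Linq;
using System.Reflection;

// Contract the host publishes; plugin authors implement it.
public interface IPlugin
{
    string Name { get; }
    void Execute();
}

public static class PluginLoader
{
    // Instantiates every concrete IPlugin found in the given assembly.
    // Assumes each plugin type has a parameterless constructor.
    public static IPlugin[] Load(string assemblyPath)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        return assembly.GetTypes()
            .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => (IPlugin)Activator.CreateInstance(t))
            .ToArray();
    }
}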
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
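A small sketch of that rule of thumb (IClock and InvoiceService are invented for the example): the static dependency on DateTime.Now is extracted behind an interface so each part can be tested in isolation.

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class InvoiceService
{
    private readonly IClock _clock;

    // The DI framework injects SystemClock at runtime;
    // a unit test injects a fake clock with a fixed time.
    public InvoiceService(IClock clock) { _clock = clock; }

    public bool IsOverdue(DateTime dueDate) => _clock.Now > dueDate;
}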
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The amount of time it would take to test your logic in code would be almost prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project: I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's deps.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway but if there's a doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well. If the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want to have a dependency through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (i.e. a logger) or b) find that, either because of the number of constructor parameters or because of a significant amount of logic in the constructor, the class is otherwise difficult to mock.
