I'm planning an MVC application that will have two variants: one for the US and one for Europe. I can't foresee a 3rd (or nth) deployment ever happening.
The two applications will both share almost identical functionality but with some (reasonably) small variations in the Model, View and Controller.
We'll be using Entity Framework with a database-first approach.
The two options I see are:
Option 1: Use a base MVC solution alongside a solution for the specifics of each deployment - extended base models, controller event handlers, some carefully considered partial views, and bundled CSS & JS.
Option 2: Use a single solution for the whole project, but two version control (SVN) branches for the separate deployments.
Which of these is the 'proper' approach for this type of project? Or is there a third option?
UPDATE: One alternative solution which has been pointed out to me, would be to actually make this one single application hosted on Azure/AWS, and with some conditional logic depending on whether the request is made from the US or EU host-header.
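A rough sketch of how that host-header check might work (the host names and the helper are hypothetical placeholders, not anything from the actual deployments):

using System;
using System.Web;

public enum DeploymentRegion { US, EU }

public static class RegionResolver
{
    // Hypothetical helper: picks the region from the incoming host header.
    // "eu." is a placeholder prefix; adjust to the real host bindings.
    public static DeploymentRegion Resolve(HttpRequestBase request)
    {
        var host = request.Url.Host;
        return host.StartsWith("eu.", StringComparison.OrdinalIgnoreCase)
            ? DeploymentRegion.EU
            : DeploymentRegion.US;
    }
}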
Option 2 will make enhancements harder to apply: every change must be made in two places, and it gets worse whenever the branches have diverged and a change needs adjusting for one of them. It only pays off if the differences between the environments are large.
Option 1 is better. Note that you will need a good plan for organising the deployment-specific partial CSS / JavaScript. However, with this design you will still face code duplication (which also happens in option 2). Consider this code:
public void DoSomething()
{
    // retrieve data
    // specific code for EU / NA
    // save data
}
This leads to duplication of the retrieve and save code.
There are tricks to handle this, but I think the cleanest way is to use Dependency Injection. With DI and a decent DI container (I'm inexperienced in configuring DI containers, so I can't suggest which one is good), you get these benefits (see the sketch after this list):
Can handle duplicated code such as the example above
Can define some profiles for easier configuration and wiring, making the maintenance easier
Testable
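As a minimal sketch of that idea (all type and member names here are invented for illustration): the region-specific step hides behind an interface, the shared retrieve/save code is written once, and the container wires in the right implementation per deployment.

public interface IRegionSpecificProcessor
{
    void Process(Order order);
}

public class EuProcessor : IRegionSpecificProcessor
{
    public void Process(Order order) { /* EU-specific rules */ }
}

public class NaProcessor : IRegionSpecificProcessor
{
    public void Process(Order order) { /* NA-specific rules */ }
}

public class Order { }

public class OrderService
{
    private readonly IRegionSpecificProcessor _processor;

    // The DI container injects EuProcessor or NaProcessor depending on the deployment profile.
    public OrderService(IRegionSpecificProcessor processor)
    {
        _processor = processor;
    }

    public void DoSomething(Order order)
    {
        // retrieve data (shared, written once)
        _processor.Process(order);   // specific code for EU / NA
        // save data (shared, written once)
    }
}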
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is for separation of concerns, loose-coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable?
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose-coupling and DI. I have also seen others that require the Data Layer dll being put in the presentation layer bin folder and configuring other classes to look there - again hampering the idea of loose-coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not actually convinced the controller should know about this, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI-container, i.e. configure which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly; let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you can see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
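To make the shape concrete, here is a minimal sketch (all names invented for illustration): the interface lives in the .Contracts assembly, the implementation lives in the implementation assembly, and consumers only ever reference the contract.

// DataAccess.Contracts.dll - referenced by Services and Presentation
namespace DataAccess.Contracts
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }
}

// DataAccess.dll - references only DataAccess.Contracts.dll
namespace DataAccess
{
    using DataAccess.Contracts;

    public class EfCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            // Entity Framework code would live here
            return new Customer { Id = id, Name = "example" };
        }
    }
}

// Services.dll - depends on the contract, never on DataAccess.dll
namespace Services
{
    using DataAccess.Contracts;

    public class CustomerService
    {
        private readonly ICustomerRepository _repository;

        public CustomerService(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public string GetCustomerName(int id)
        {
            return _repository.GetById(id).Name;
        }
    }
}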
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second: I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
Of course, the container ends up bundling all the dependencies that you have avoided in your application architecture. To make clear what I mean, consider an XML config for your case: it would contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling. It is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally rely on the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all the assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
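For illustration, an attribute-based setup with MEF (which you mentioned) might look roughly like this, reusing the hypothetical names from the sketch above; the catalog path would be the application's bin directory:

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using DataAccess.Contracts;

// In DataAccess.dll: the attribute declares which contract this class fulfils.
[Export(typeof(ICustomerRepository))]
public class EfCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // Entity Framework code would live here
        return new Customer { Id = id };
    }
}

// In the composition root: discover parts in whatever assemblies sit in the bin folder.
public static class CompositionRoot
{
    public static ICustomerRepository CreateCustomerRepository(string binPath)
    {
        var catalog = new DirectoryCatalog(binPath);
        var container = new CompositionContainer(catalog);
        return container.GetExportedValue<ICustomerRepository>();
    }
}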
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say, you've chosen convention based config of your DI container. What you can do now is to open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy and paste it to your original application's program folder, replacing the old DataAccess.dll. Done, you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. Using IoC and DI really is a tradeoff. I highly recommend being pragmatic in your design decisions: don't interface everything, it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well, I've spent a couple more days on this (around a week in total now) and made little further progress. I'm fairly sure I had the container set up correctly, with my conventions discovering the correct parts to be mapped, but I couldn't figure out the missing link to get the controller DI to activate - I constantly received an error stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and just works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,   // e.g. CustomerRepository -> ICustomerRepository
    WithName.Default);
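With that registration in place and the bootstrapper's dependency resolver wired up, constructor injection into controllers then works without any extra code. Roughly (the interface and controller names here are just placeholders):

using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical contract from the Services layer, implemented somewhere in xxx.DAL.Repository.
public interface IOrderRepository
{
    IEnumerable<Order> GetAll();
}

public class Order { }

public class OrdersController : Controller
{
    private readonly IOrderRepository _orders;

    // Unity resolves this constructor - no parameterless constructor required.
    public OrdersController(IOrderRepository orders)
    {
        _orders = orders;
    }

    public ActionResult Index()
    {
        return View(_orders.GetAll());
    }
}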
I'm building an MVC4 app. I've used EF5 model-first and kept it pretty simple. This isn't going to be a huge application; there will only ever be 4 or 5 people on it at once, and all users will be authenticated before being able to access any part of it. It's very simply a 'place order - dispatcher sees order - dispatcher completes order' sort of application.
Basically my question is: do I need to be worrying about repositories and ViewModels if the size and scope of my application is so small? Any view that is strongly typed to a domain entity uses all of the properties within that entity. I'm using TryUpdateModel in my controllers and have read things saying this can cause a lot of problems, but not a lot of information on exactly what those problems are. I don't want to use an incredibly complicated pattern for a very simple app.
Hopefully I've given enough detail, if anyone wants to see my code just ask, I'm really at a roadblock here though, and could really use some advice from the community. Thanks so much!
ViewModels: Yes
I only see downsides to passing EF entities directly to a view:
You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment (see the sketch after this list)
It becomes very easy to accidentally lazy load extra data from your view, resulting in select N+1 problems
In my personal opinion, a view model should closely resemble the information displayed on the view, and in most cases (except for basic CRUD stuff) a view contains information from more than one entity
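Regarding the over-posting point above, a minimal sketch (the entity, view model and controller names are invented): the view model exposes only the fields the form may edit, and the mapping back to the entity is explicit.

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Web.Mvc;

// EF entity - contains a field a form post must never be able to set.
public class User
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public bool IsAdmin { get; set; }   // over-posting target if the entity is bound directly
}

public class AppDbContext : DbContext
{
    public DbSet<User> Users { get; set; }
}

// View model - only what the view actually edits, plus its validation rules.
public class EditUserViewModel
{
    public int Id { get; set; }

    [Required, StringLength(50)]
    public string DisplayName { get; set; }
}

public class UsersController : Controller
{
    private readonly AppDbContext _db = new AppDbContext();

    [HttpPost]
    public ActionResult Edit(EditUserViewModel model)
    {
        if (!ModelState.IsValid) return View(model);

        var user = _db.Users.Find(model.Id);
        user.DisplayName = model.DisplayName;   // explicit mapping; IsAdmin cannot be posted
        _db.SaveChanges();
        return RedirectToAction("Index");
    }
}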
Repositories: No
The Entity Framework DbContext already is an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories.
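To make that concrete, a small sketch (the context and entity are invented): the DbSet plays the repository role and SaveChanges() the unit-of-work role, so a simple controller can use the context directly.

using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

public class Order
{
    public int Id { get; set; }
    public bool Dispatched { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }   // repository role
}

public class DispatchController : Controller
{
    public ActionResult Pending()
    {
        using (var db = new OrdersContext())
        {
            var pending = db.Orders.Where(o => !o.Dispatched).ToList();
            return View(pending);
        }
    }

    [HttpPost]
    public ActionResult Complete(int id)
    {
        using (var db = new OrdersContext())
        {
            db.Orders.Find(id).Dispatched = true;
            db.SaveChanges();   // unit-of-work role: commits the change as one unit
            return RedirectToAction("Pending");
        }
    }
}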
In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Repository pattern and ViewModels are both powerful, and not very difficult or time-consuming to implement. I would suggest using them even for small projects.
Yes, you still want to use a repository and view models. Both of these tools let you put code in one place instead of all over the place, and will save you time. More than likely, they will save you copy-paste errors too.
Moreover, having these tools in place will make it easier to extend the system in the future, instead of having to pore through code with poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers / code sections. View models and a repository are not heavily intrusive to implement. It is not like you are going to implement a controller factory or dependency injection.
I’m writing an application with Ruby on Rails. This application will be delivered to a minimum of two different customer types. The base is always the same but some of the views differ. That’s mostly it. For now.
Is there any way, for example using branches, to use the same code base and separate only the views, for example? I read the Git manual for branching but am still not sure if this is the best way to achieve what I need.
Another idea would be forking. But, is that clever? If I change something in the code of fork A, is it easy to merge these changes into fork B?
Branching and forking in Git are not bad at all, as the merge support is great (possibly the best of all VCSs).
Personally, I don't like the idea of branching or forking a project to provide different customizations, as it can very quickly become really difficult to manage - e.g. what are you going to do if you have 15 different deployments?
I think a better approach is to build the application so it behaves differently depending on some parameters. I'm well aware that sometimes the implementations are very different, in which case this might not be useful.
Another approach is to build the core of your app as a gem which acts as a service to the application, so that the only thing you customize per client is the views. Of course, the gem should be generic enough to provide all the services you need.
Please share with us what you decided, as there's no best answer for your question.
It would probably be better to make your product select between the types at either build time or runtime; that way you can use a single set of source.
Otherwise it is possible with branches, and merging, but you'll have more difficulty managing things. Forking is basically branching at this level.
I agree with Augusto. You could run your app in two different environments, i.e. production_A and production_B. From there, you could use SettingsLogic to define configurations based on Rails.env, then reference those settings in your app when selecting which view to use, for example.
The Grails framework has a lot of constructs/features that allows for adhering to the DRY principle ("don't repeat yourself") within a project. That is: within a specific project you're seldom required to repeat identical blocks of settings or code. So far so good.
However, the more I've worked with Grails, the more I've observed that I repeat code not within the same project but between projects. That is, project A has controllers, GSPs and images that overlap with project B. This is a maintenance nightmare, since bug fixes in project A must also be made in project B, etc.
I'd like to take DRY to the next level by not duplicating code between my projects.
My question: how do you tackle this problem (violated inter-project DRY) in your own internal Grails projects?
Please be very specific/concrete. If possible try to include specific code examples on how you solve it in practice.
Writing a custom plugin is the best way. You don't need to release it to the public repository, as you can use a private repository somewhere within your own network.
I haven't had enough duplicated code yet to pull out a plugin (most of the code repeated in my projects seem to be covered by the various public plugins), but a plugin can be as simple as a few common domain classes or services.
I agree with Lee: using common/shared plugins is probably the best way to go. At one place I worked, we had quite a few internal plugins for this very reason.
The most common pattern is to put your common domain objects into their own plugin. This works really well for domain classes or services. We didn't end up refactoring the controllers, views, and static resources into a plugin, but the same principle should apply.
Long story short: Reuse of Grails artifacts = use a plugin.
To add to Lee and Colin's points, which are both valid, I think thinking in terms of multiple plugins can yield other benefits.
For example, you can split your application functionality into multiple pieces and have different people work on them. It can also pay off during deployment: if, say, you need two layers of access to an app - user-level and admin - and your domain model is in a separate plugin, as Colin suggested, you can easily build two applications and deploy them separately.
For my app, I have several plugins specific to my project - a domain-classes plugin, one that is a bunch of code for importing data (which I can run easily against my site), and some other plugins for graphing and customization of scaffolding. It takes a bit more thinking, but I expect this factoring to pay dividends in the future as we bring more people onto the team.
What are common reasons to split a development project (e.g. ASP.NET MVC application) into multiple projects? Code organization can be done via folders just as well. Multiple projects tend to generate circular reference conflicts and increase complexity by having to manage/resolve those.
So, why?
Some reasons are
Encapsulation - By packaging a set of routines into a library, either as a static library or a set of DLLs, it becomes a black box. For it to be a good black box, all you need to do is make sure you give the right inputs and get the right outputs. It helps when you re-use that library. It also enforces certain rules and prevents programming by hacks ('hmm... I'll just make that member function public for now'). See the sketch after this list.
Reduces compile time - The library is already compiled; you don't have to rebuild it, just link to it (assuming you are doing C++).
Decoupling - By putting your classes into standalone libraries, you reduce coupling and can reuse the library for other purposes. Likewise, as long as the interface of the library does not change, you can make changes to the library all you like, and others who link to it or reference it do not need to change their code at all. DLLs are useful in this respect because no re-compilation is required, but they can be tricky to work with if many applications install different versions of the same DLLs. You can update libraries without impacting the client's code. While you can do the same with just folders, there is no explicit mechanism to force this behaviour.
Also, by practicing the discipline of having separate libraries, you make sure that what you have written is generic and decoupled from the implementation.
Licensing/Commercialization - Well, I think this is quite obvious.
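To illustrate the encapsulation point above with a tiny sketch (invented names): the assembly boundary, combined with internal visibility, is what actually enforces the black box; a folder inside the same project would not.

// Inside MyCompany.Pricing.dll
namespace MyCompany.Pricing
{
    public class PriceCalculator
    {
        // The only surface consumers of the library can call.
        public decimal Quote(decimal basePrice, int quantity)
        {
            return ApplyVolumeDiscount(basePrice, quantity);
        }

        // internal: usable inside this assembly, invisible to projects that reference the DLL,
        // so it cannot be "made public for now" from the outside.
        internal decimal ApplyVolumeDiscount(decimal basePrice, int quantity)
        {
            var discount = quantity >= 100 ? 0.9m : 1.0m;
            return basePrice * quantity * discount;
        }
    }
}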
One possibility is to have a system that a given group (or single developer) can work on independently of the rest of the code. Another is to factor out common utility code that the rest of the system needs -- things like error handling, logging, and common utilities come to mind.
Of course, just when thinking about what goes in a particular function / class / file, where the boundaries are is a matter of art, not science.
One example I can think of is that you might find in developing one project that you end up developing a library which may be of more general use and which deserves to be its own project. For instance maybe you're working on a video game, and you end up writing an audio library that's in no way tied specifically to the game project.
Code reuse. Let's say you have project A and you start a new project B which has many of the same functions as project A. It makes sense to pull out the shared parts of A and make them into a library which can be used by both A and B. This allows you to have the code in both without having to maintain the same code in two places.
Code reuse, inverted. Let's say you have a project which works on one platform. Now you want it to work on two platforms. If you can separate out the platform-dependent code, you can start different projects for each platform-dependent library and then compile your central codebase with different libraries for different platforms.
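A sketch of that inverted reuse (names invented): the shared core codes against an abstraction, and each platform project supplies its own implementation, so the same core compiles into every platform build.

// Core.dll - platform-neutral code, shared by every build.
namespace Core
{
    public interface IClipboard
    {
        void Copy(string text);
    }

    public class Editor
    {
        private readonly IClipboard _clipboard;

        public Editor(IClipboard clipboard)
        {
            _clipboard = clipboard;
        }

        public void CopySelection(string selection)
        {
            _clipboard.Copy(selection);
        }
    }
}

// Platform.Windows.dll - referenced only by the Windows build.
namespace Platform.Windows
{
    public class WindowsClipboard : Core.IClipboard
    {
        public void Copy(string text) { /* Win32 clipboard calls */ }
    }
}

// Platform.Mac.dll - referenced only by the Mac build.
namespace Platform.Mac
{
    public class MacClipboard : Core.IClipboard
    {
        public void Copy(string text) { /* Cocoa pasteboard calls */ }
    }
}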
Some thoughts on splitting your project into multiple projects:
One reason for separating a project into multiple class libraries is re-usability. I've yet to see the BLL or DAL part of an application re-used in another application. This is what textbooks from the 90s used to tell us, but most if not all modern applications are too specific, and even in the same enterprise I've never seen the same BLL or DAL parts re-used across multiple applications. Most of the time, what you have in those class libraries is purely there to serve what the user sees in that particular application, and it's not something that can easily be re-used (if at all).
Another reason for separating a project into multiple class libraries is about deployability. If you want to independently version and deploy these pieces, it does make sense to go down this path. But this is often a use case for frameworks, not enterprise applications. Entity Framework is a good example. It’s composed of multiple assemblies each focusing on different areas of functionality. We have one core assembly which includes the main artifacts, we have another assembly for talking to a SQL Server database, another one for SQLite and so on. With this modular architecture, we can reference and download only the parts that we need.
Imagine if Entity Framework was only one assembly! It would be one gigantic assembly with lots of code that we won’t need. Also, every time the support team added a new feature or fixed a bug, the entire monolithic assembly would have to be compiled and deployed. This would make this assembly very fragile. If we’re using Entity Framework on top of SQL Server, why should an upgrade because of a bug fix for SQLite impact our application? It shouldn’t! That’s why it’s designed in a modular way.
In most web applications out there, we version and deploy all these assemblies (Web, BLL and DAL) together. So separating a project into three projects does not add any value.
Layers are conceptual. They don't have a physical representation in code. Having a folder or an assembly called BLL or DAL doesn't mean you have properly layered your application, neither does it mean you have improved maintainability. Maintainability is about clean code, small methods, small classes each having a single responsibility and limited coupling between these classes. Splitting a project with fat classes and fat methods into BLL/DAL projects doesn't improve the maintainability of your software. Assemblies are units of versioning and deployment. Split a project into multiple projects if you want to re-use certain parts of that in other projects, or if you want to independently version and deploy each project.
Source: https://programmingwithmosh.com/csharp/should-you-split-your-asp-net-mvc-project-into-multiple-projects/
Ownership, for one thing. If you have developers responsible for different parts of the code base, then splitting the project up is the natural thing to do. One would also split projects by functionality. This reduces conflicts and complexity; if complexity increases, that just means a lack of communication and you are simply doing it wrong.
Instead of questioning the value of code in multiple assemblies, question the value of clumping all of your code in one place.
Would you put everything in your kitchen in a single cabinet?
Circular references are circular references, whether they happen between assemblies or within them. The design of the offending components is most likely sub-optimal; eschewing organization via assemblies ironically prevents the compiler from detecting the situation for you.
I don't understand the statement that you can organize code just as well with folders as with projects. If that were true, our operating systems wouldn't have the concept of separate drives; they would just have one giant folder structure. Higher-order organizational patterns express a different kind of intent than simple folders.
Projects say "These concepts are closely related, and only peripherally related to other concepts."
There are some good answers here so I'll try not to repeat.
One benefit of splitting code out into its own project is being able to reuse the assembly across multiple applications.
I liked the functional approach mentioned as well (e.g. Inventory, Shipping, etc. could each get their own project). Another idea is to consider the deployment model. Code shared between layers, tiers, or servers should probably be in its own common project (or set of projects if finer control is desired). Code earmarked for a certain tier may be in its own project; e.g. if you had a separate web server and application server, you wouldn't want to deploy the UI code to the application server.
Another reason to split may be to allow small incremental deploys once the application is in production. Let's say you get an emergency production bug that needs to be fixed. If the small change requires a rebuild of the entire (one project) application you might have a hard time justifying a small test cycle to QA. You might have an easier sell if you were deploying only one assembly with a smaller set of functionality.