How to Separate Existing Project using Micro Service Architecture? - asp.net-mvc

I have created a project in ASP.NET MVC for the banking sector.
At the time of development, we did not think about scaling, so we did not use any kind of modular architecture.
Now I am facing lots of problems with deployment and with developing new features.
If I develop any new feature, I currently have to release the entire project, because there is only one assembly - projectmain.dll.
Now I want to separate all the controllers into separate projects, deploy them separately, and consume them from the main project using a microservice architecture.
Note
My views are tightly coupled with the controllers.
So how can I migrate all of these controllers to a microservice architecture?
Project Structure Explanation

"How to migrate to microservices" needs a very long answer that could fill a book. I can, however, give you some directions that may help you.
First of all, you could follow the DDD approach to correctly identify the bounded contexts of your application. In an ideal situation, every domain corresponds to a bounded context, so you could split by bounded context. Every bounded context could contain one or more microservices, but a microservice should not be larger than a bounded context. From what I see, your controllers are already split by domain, or at least by subdomain. You could try making a microservice for each controller and see how it goes; it is possible to split further, e.g. by DDD Aggregate (every Aggregate type could become its own microservice).
Second, you should have a separate project for each microservice. Ideally, if you want resilience and scalability, a microservice should not call another microservice while serving an external request. This means that when a microservice receives a request from a client, it should already have all the data it needs from the other microservices in its local storage, kept up to date by background tasks (using synchronous or asynchronous calls).
Last but not least, each microservice should have its own database; microservices should not share a database or tables.
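The "no calls to other services during a request" rule above can be sketched in plain C#. This is a minimal illustration only: `CustomerSnapshot`, `LoansService` and the fetch delegate are hypothetical names, and the remote service is stubbed out.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical read model replicated from another microservice.
public record CustomerSnapshot(int Id, string Name);

public class LoansService
{
    // Local replica: refreshed by a background task, read during requests.
    private readonly ConcurrentDictionary<int, CustomerSnapshot> _customers = new();

    // Called by a background task (e.g. on a timer), never by a request handler.
    public void RefreshReplica(Func<CustomerSnapshot[]> fetchFromCustomersService)
    {
        foreach (var c in fetchFromCustomersService())
            _customers[c.Id] = c;
    }

    // Request handling only touches local state - no cross-service call.
    public string DescribeLoanHolder(int customerId) =>
        _customers.TryGetValue(customerId, out var c) ? c.Name : "unknown";
}
```

The design trade-off is staleness versus resilience: the replica may lag behind the owning service, but the Loans service keeps answering requests even if the Customers service is down.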

In this situation, I guess one option would be to convert your controllers into a Web API, with each action method returning JSON data instead of a rendered view. This back end could then be bundled as one microservice. The next step would be to convert your Razor views into a purely front-end microservice (using a framework like Angular or React) with two layers:
A service layer responsible for making the API calls that fetch the JSON data returned by your Web API
A component layer responsible for mapping the fetched JSON data to your UI elements/forms
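As a minimal sketch of the first step, an action that used to render a view instead returns a serializable DTO. `AccountSummary` and its fields are made up for illustration; a real Web API action would return the object and let the framework serialize it.

```csharp
using System;
using System.Text.Json;

// Hypothetical DTO that a converted Web API action would return as JSON
// instead of rendering a Razor view.
public record AccountSummary(string AccountNumber, decimal Balance);

public static class AccountsApi
{
    // Stand-in for an API action method: produces the JSON payload that
    // the front-end service layer would fetch.
    public static string GetSummaryJson() =>
        JsonSerializer.Serialize(new AccountSummary("ACC-001", 250.75m));
}
```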

Related

Using Web Services with EF Code First approach

I have developed some MVC applications using the Entity Framework code-first approach, and I am now developing a new application that will also expose web services for the mobile applications we will create. I have some questions about the issues below. Could you please clarify them for me one by one?
Which web service technology should I use, i.e. Web API, WCF, etc.? (I am using MVC 5 and EF 6 in my project.)
Can I use the same CRUD methods for my web application and for the web services? If so, what modifications should be made to the methods and to the other parts, i.e. models, etc.?
For a current MVC application where the EF code-first approach was used, is it better to create new methods for the web services, or should the current methods be updated to also support web services?
Thanks in advance...
I highly recommend using Commands and Queries. It's covered in this and this article.
A Command is a simple DTO, so it can easily be sent over the network. This way you have control over the fields and behaviour you want to make public.
Because commands are simple data containers without behavior, it is very easy to serialize them (using the XmlSerializer for instance) or send them over the wire (using WCF for instance), which makes it not only easy to queue them for later processing, but it also makes it very easy to log them in an audit trail - yet another reason to separate data and behavior. All these features can be added without changing a single line of code in the application (except perhaps a line at the start-up of the application).
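The quoted point can be shown with a small sketch: a command as a plain data container, serialized with the XmlSerializer exactly as described. `TransferFundsCommand` and its fields are hypothetical.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// A command is just data - no behavior - so serialization is trivial.
public class TransferFundsCommand
{
    public int FromAccountId { get; set; }
    public int ToAccountId { get; set; }
    public decimal Amount { get; set; }
}

public static class CommandWire
{
    // Serialize a command so it can be queued, logged in an audit trail,
    // or sent over the wire.
    public static string ToXml(TransferFundsCommand cmd)
    {
        var serializer = new XmlSerializer(typeof(TransferFundsCommand));
        using var writer = new StringWriter();
        serializer.Serialize(writer, cmd);
        return writer.ToString();
    }

    public static TransferFundsCommand FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(TransferFundsCommand));
        using var reader = new StringReader(xml);
        return (TransferFundsCommand)serializer.Deserialize(reader)!;
    }
}
```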

Should I expose the repository to the controller when using a service?

I am currently developing an ASP.NET MVC application using the unit of work, generic repository and service patterns. I am a little confused about how the design should work in the controller.
Should I expose the repository to the controller, or should the controller know only about the service?
The controller needs the repository to retrieve entities for a combobox list.
The problem with exposing the repository is that it has save and delete methods which should only be called by the service.
Can someone help me with this problem?
The repository pattern is used to abstract away the data source.
There is no need to abstract away the abstraction, so I would use the repository directly.
However, as soon as business logic starts to appear in the presentation layer, you should extract it and put it in a service.
It depends on whether you need any business logic in front of the persistence layer (the repository). The moment you have any, it will end up in the controller. This might be acceptable for a very simple system, but the moment the system expands beyond the toy stage (REST APIs, admin/support apps, offline/batch processing), you will want to push that logic into a shared tier - indeed, the service layer. Put the abstraction in now and save yourself some refactoring later. This also keeps the controllers in the business they were meant to be in: directing traffic and calling services to get/put data.
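A compromise between the two answers can be sketched as follows: the controller depends only on a service, and the service exposes just the read-only lookup the combobox needs, while save/delete stay behind the service boundary. All names here (`Country`, `ICountryRepository`, `CountryLookupService`) are illustrative, with an in-memory repository standing in for the real data source.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record Country(int Id, string Name);

public interface ICountryRepository
{
    IEnumerable<Country> GetAll();
    void Save(Country c);    // mutating members stay out of the controller's reach
}

// In-memory stand-in for the real EF-backed repository.
public class InMemoryCountryRepository : ICountryRepository
{
    private readonly List<Country> _items = new() { new(1, "Norway"), new(2, "Chile") };
    public IEnumerable<Country> GetAll() => _items;
    public void Save(Country c) => _items.Add(c);
}

// The service exposes only what the controller actually needs.
public class CountryLookupService
{
    private readonly ICountryRepository _repo;
    public CountryLookupService(ICountryRepository repo) => _repo = repo;

    public IReadOnlyList<string> ComboboxItems() =>
        _repo.GetAll().Select(c => c.Name).OrderBy(n => n).ToList();
}
```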

Why should I use WCF with MVC?

I'm testing ASP.NET MVC 3.
For an AJAX call, I can use an MVC controller method or a WCF service.
But why would I use WCF if I can do it with MVC?
My question is: should I use WCF services with MVC, or not? And why? And in which cases?
Thanks,
WCF is a framework for developing web services. The idea of a web service is that it decouples the functionality that provides raw data from the functionality that processes the data and presents it to the end user. There are a few advantages to this:
If you are providing an API, you don't know how the data will be used. You want to simply provide raw data to a set of applications and let those applications handle the rest. A web service does just that: it opens up the data layer of an application while leaving the rest closed.
It can improve data-layer maintainability by enforcing loose coupling. Loose coupling means that the components of your application are not entwined with one another. This is a good thing because it makes it easier to modify parts of your application without disrupting the rest. For example, if it is understood that a given function call will return a set JSON object, you can change the table structure of the database that provides the data for that object without interfering with the code of the consuming application. This works as long as you uphold the predefined data contract by always supplying the same type of data in the same format. On the other hand, if database queries, connection strings and the like are all hardcoded into your application, modifying your database logic becomes significantly more difficult.
In your case, if you are just developing a small to medium-sized web application and have no intention of launching an API or similar service, there is probably no need for WCF.
Keep in mind, however, that while you probably don't need to write a WCF service for your application, you should still try to loosely couple your application layers as you would with a service. You can do this by splitting data-access code or object (entity) definition code out into separate projects. Loose coupling, whether implemented with WCF or just MVC, makes maintaining your project simpler, easier and more affordable, and is overall a very good practice to abide by.
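The data-contract point above can be sketched in plain C#: two interchangeable data sources honor the same JSON contract, so the consumer is unaffected when the storage side changes. `ProductDto`, `IProductSource` and both implementations are made-up names for illustration.

```csharp
using System;
using System.Text.Json;

// The data contract consumers depend on - not the database schema.
public record ProductDto(int Id, string Name);

public interface IProductSource
{
    ProductDto Get(int id);
}

// Today the data comes from a hard-coded store...
public class StaticProductSource : IProductSource
{
    public ProductDto Get(int id) => new(id, "Widget");
}

// ...tomorrow from a redesigned database. As long as both honor the same
// contract, consuming code never notices the change.
public class RenamedTableProductSource : IProductSource
{
    public ProductDto Get(int id) => new(id, "Widget"); // reads the new schema internally
}

public static class ProductApi
{
    public static string GetJson(IProductSource source, int id) =>
        JsonSerializer.Serialize(source.Get(id));
}
```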
MVC is fine, you really don't need WCF for this. MVC creates some sort of REST API (all your action methods get their own URL that you can use to call the method), so you can use that.

Where to put Entity Framework Data Model in MVC application? Specific example

First I want to refer to this post:
Where to put Entity Framework Data Model in MVC application?
My edmx will have 7-10 tables in it. Not more.
The problem is that I have to build the model I'm working with out of, let's say, 4 of those tables.
So I'm asking myself: are these tables real model representations, would it be correct to put the edmx file in the "Models" folder, and what should I name this CONTAINER of models?
Or are 10 tables enough to justify a new project? What should the project be called - .DataAccess? And what should the edmx file in it be named?
I don't have much experience with MVC and EF and am trying to figure out the best practice here.
Update: This post tells me not to put it in the Models folder: "The model should be decoupled from the backend data store technology as much as possible."
Personally my MVC projects (regardless of size) consist of the following as a minimum:
Data
Logic
Site
This structure seems to work pretty well as it separates business logic from storage and display.
You definitely don't want to put the EDMX in the Models folder, as that is reserved for view models. Best practice says that view models should be entirely disconnected from your storage entities.
In terms of naming the EDMX, I normally name it after the short name of the project; the more important thing is to get the namespace right for the EDMX so your models sit in the correct namespace location.
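The "view models disconnected from storage entities" advice can be sketched like this: the storage entity lives with the EDMX in the Data project, the view model in the Site project, and an explicit mapping sits between them. All names (`CustomerEntity`, `CustomerViewModel`, the namespaces) are illustrative.

```csharp
using System;

namespace Data
{
    // What the EDMX / storage layer knows about.
    public class CustomerEntity
    {
        public int Id { get; set; }
        public string FirstName { get; set; } = "";
        public string LastName { get; set; } = "";
    }
}

namespace Site.Models
{
    // What the view binds to - shaped for display, not for storage.
    public class CustomerViewModel
    {
        public string DisplayName { get; set; } = "";

        // Explicit mapping keeps the view model free of storage concerns.
        public static CustomerViewModel From(Data.CustomerEntity e) =>
            new() { DisplayName = $"{e.LastName}, {e.FirstName}" };
    }
}
```

Because the view model never references the entity type directly in the view, the storage schema can change without touching the views; only the mapping needs updating.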
My response is based on Silverlight, and I understand it's a bit out of context because you are asking from an MVC viewpoint, but please allow me to illustrate where I put my EDMX.
First project solution
-Widgets. These are multiple UI projects with multiple XAML pages.
-UI logic is heavy, orchestrating every widget and XAML page in one main user interface.
-View-Models. These are almost the equivalent of controllers in MVC. I use XAML to bind directly to View-Models, for example QuotationItemModel.vb and xyz.vb and such. Multiple XAML pages may share one VM.
-XAML pages are supposed to use command bindings to the implementing View-Models; for example, a button click is routed to the VM. I didn't achieve this because the UI coordination logic (from another UI architect) interfered with my hooking up of the delegate command
(the CanExecute Func(Of Object, Boolean) and Execute Action(Of Object)), causing a stack overflow in the first-level widgets' click events.
-Model. There is but one function here. Its job is to hook a delegate to the web service's async-call-completed event and then trigger the web service.
The delegate's implementation actually sits back in the View-Model, i.e. QuotationItemModel.vb, not inside Model. There is truly only one function in Model.vb.
-There is no other logic in Model; Model.vb only decides end points, HTTP bindings, WCF stuff.
-There is no EDMX whatsoever in this solution. Model also knows nothing about the database.
Second project (but inside the third solution)
WCF implementation. Lightweight; again one function, operation contracts only.
The code-behind only passes business objects to the third project.
The connection string for the EDMX is configured here and passed to the third project.
No other logic.
There is no awareness of the EDMX whatsoever.
Third project solution
-Begins with a simple factory that delegates logic and invokes classes.
-What begins as simple factory logic becomes a very heavy backend. It uses design patterns to alleviate maintenance concerns; from here, the patterns could criss-cross between commands, strategies, abstract types, etc.
-The EDMX design is fully apparent in this layer.
-Business objects interact in a logical manner with the EDMX.
-I do either LINQ to Entities or parameterized queries here.
-This layer consists of business logic, such as: an underwriting ID must exist before a claim transaction can be issued, or a quotation's running number sequence is based on the server date, etc.
-There is some manual mapping of business objects to entities. Potentially tedious, but not always.
-The result is passed back as XML.
The third project could very well be a separate solution with another lightweight web service in between, giving readiness for a 3-tier architecture. Then I would produce my own connection string to the EDMX at this pure layer. But mine is now more like a '2.5-layer' architecture; I sheepishly expose the connection string in the middle tier's web.config.
A tier, as opposed to a layer, means having another hardware platform altogether. Layers are separations for domain-driven design in the problem space, i.e. the UI, communication and business domains. Technically speaking, the SQL Server database (behind the EDMX) could very well sit on another tier, e.g. Windows Azure.
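The kind of backend business rule described above (an underwriting ID must exist before a claim transaction can be issued) could be sketched like this. `ClaimsBackend` and the in-memory set are hypothetical stand-ins for the EDMX / LINQ to Entities layer.

```csharp
using System;
using System.Collections.Generic;

public class ClaimsBackend
{
    // Stand-in for data reachable through the EDMX.
    private readonly HashSet<int> _knownUnderwritingIds = new();

    public void RegisterUnderwriting(int underwritingId) =>
        _knownUnderwritingIds.Add(underwritingId);

    // The rule is enforced here, in the backend layer - not in the UI
    // or WCF layer, which stay unaware of the EDMX.
    public string IssueClaim(int underwritingId)
    {
        if (!_knownUnderwritingIds.Contains(underwritingId))
            throw new InvalidOperationException("Unknown underwriting ID.");
        return $"<claim underwriting=\"{underwritingId}\" />"; // result passed back as XML
    }
}
```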
There are pros and cons I see here. Please go gently with any criticism; I am really new to layering.
Cons
Without exposing data contracts, my UI is blind when communicating in the language of business objects and contracts. Previously this was easily achieved by having the EDMX in the WCF layer.
I now use XElement to represent shared business objects, but I still need to figure out a way to expose the data contract without exposing database internals. Currently, I 'instinctively' know and code the database fields in my XElements.
Potentially it's like silent binding to the backend EDMX. Silence is sometimes bad, because if I get a column without data there are many suspected causes. Nothing that cannot be solved via good error messages in the XML result passed back, using my imagination.
There is a weak mechanism for versioning. Perhaps new clients interact with a separate operation contract that silently redirects to Backend v2.0 while existing clients use Backend v1.0. This potentially means you now need two EDMX files, one each for the old and new databases.
Pros
Extreme decoupling. I can delete/rebuild the EDMX, and the UI and WCF still compile. Only my third solution gets compilation errors in this extreme test.
From the Silverlight UI, triggering and communicating with a Microsoft Report Viewer report invokes exactly the same classes as the UI does. There are no 'additional web service functions for the report' whatsoever. Whatever EDMX + logic the UI requests is exactly the same for the report - unless I choose otherwise.
PS: Silverlight communicates filter criteria to the report via the query string.
Again, the report is not aware of the EDMX. For example, if I delete the EDMX from the backend and then update the data connection in the report project, the report project still compiles without problems.
Readiness for migration to multiple tiers without tears. Seasonal load balancing, an increase in the customer base, etc. may trigger this investment in architecture.
Reusability of business logic. For example, if the boss gets tired of Silverlight, I just need to re-code the UI business objects, say into JSON under HTML5. There are no changes to the business logic whatsoever, except for new requirements - for example, expanding into life insurance to coexist with the existing general insurance, a module currently coded in Silverlight. Imagine life insurance in HTML5 coexisting with the same backend. Again, this is because neither front end is aware of the EDMX; I just need to focus on building the data contract from within the new technology.
An unexpected (I am new to layering, really!) side effect: I can potentially test my backend separately from the UI, which in turn exercises LINQ to Entities (that EDMX). Great for unit testing.
Updating the business logic does not affect deployment to IIS (the middle layer), except maybe when it comes to versioning.
Anyway, here's Layered Application Solution Guidance from talented software architect Serena Yeoh:
Layered Architecture Sample for .NET
http://layersample.codeplex.com/
http://layerguidance.codeplex.com/
Notice, in the sample you download, the ingenuity of having multiple UIs over different technologies on top of a common backend, where the EDMX lives and sleeps - and what's more, over Windows Workflow Foundation, selectively called as needed. You can see where Serena puts the EDMX, and you get workable running code to play with. Pure bliss.

Who is responsible for parsing? The controller or service layer?

The project I'm currently working on has a Core API which is used by everything: services, web, ...
This API has following layers:
Core
Core.Models
Core.DataProviders
Core.DataProviders.LinqToSql
Core.Utils
On top of this API is my ASP.NET MVC application. This looks like this:
Web
Web.Models (Some Web specific objects and logic. For example a class which builds a list of quarters to help me render a day in a scheduling table.)
Web.Extensions (Html Helpers, Controller base..)
Web.ViewModels (Composite objects to pass to the View.)
Web.Services (Layer which communicates with the Core and Web.Models. This layer builds ViewModels for my Controllers and helps keep my Controllers clean.)
Any serious flaws in this setup?
A more specific question: I need to parse some things coming from my View before I can pass them to the Core. Should I handle this in the Controller or in the Service layer?
Generally speaking, data submitted from the view should be parsed by a ModelBinder, falling back to the Controller when using a ModelBinder doesn't seem to make sense.
Parsing in an application service makes sense if multiple sources can submit data in the same format (such as a web service or file system persistence).
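As a rough illustration of the model-binder suggestion, here is the parsing done as a plain helper so the sketch stays self-contained; a real MVC ModelBinder would implement IModelBinder and report failures via model state instead. `QuarterInput` and the form field formats are made up, loosely echoing the quarters/scheduling example from the question.

```csharp
using System;
using System.Globalization;

// Hypothetical bound model: a day plus a quarter-of-day index.
public record QuarterInput(DateTime Day, int Quarter);

public static class QuarterBinder
{
    // Parses raw form values the way a ModelBinder would, before anything
    // reaches the controller or the Core. Returns false on bad input,
    // mirroring a binder adding a model-state error.
    public static bool TryBind(string rawDay, string rawQuarter, out QuarterInput? result)
    {
        result = null;
        if (!DateTime.TryParseExact(rawDay, "yyyy-MM-dd", CultureInfo.InvariantCulture,
                                    DateTimeStyles.None, out var day))
            return false;
        if (!int.TryParse(rawQuarter, out var quarter) || quarter is < 1 or > 4)
            return false;
        result = new QuarterInput(day, quarter);
        return true;
    }
}
```

This keeps both the controller and the service layer working with typed, already-validated input.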
