ASP.NET MVC models when using external web services - asp.net-mvc

I'm getting started on a new MVC project where there are some peculiar rules and a little bit of strangeness, and it has me puzzled. Specifically, I have access to a database containing all of my data, but it has to be handled entirely through an external web service. Don't ask me why, I don't understand the reasons. That's just how it is.
So the CRUD will be handled via this API. I'm planning on creating a service layer that will wrap up all the calls, but I'm having trouble wrapping my head around the model... To create my model-based domain objects (customers, orders, and so on), should I:
Create them all manually
Create a dummy database and point an ORM at it
Point an ORM at the existing database, but ignore the ORM's persistence in favor of the API.
I feel like I've got all the information I need to build this out, but I'm getting caught up with the API. Any pointers or advice would be greatly appreciated.

Depending on the scale of what you're doing, option 3 is dangerous, as you're assuming the database model is the same as the one exposed by the external service. Options 1 and 2 aren't, IMHO, much different from each other: in either case you'll have to decide what your objects, properties and behaviours are going to be; it just boils down to whether you're more comfortable doing it in classes or database tables.
The key thing is to make sure that the external service calls are hidden behind some sort of wrapper. Personally, I'd then put a repository on top of that to handle querying the external service wrapper and returning domain objects.
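As a rough sketch of that layering (ExternalApiClient, CustomerDto and the Customer domain class are hypothetical placeholders for whatever the real API and domain model look like):

    public interface ICustomerRepository
    {
        Customer GetById(int id);
        void Save(Customer customer);
    }

    // Wraps every external service call so the rest of the app never
    // sees the API client or its DTOs directly.
    public class ApiCustomerRepository : ICustomerRepository
    {
        private readonly ExternalApiClient _client;

        public ApiCustomerRepository(ExternalApiClient client)
        {
            _client = client;
        }

        public Customer GetById(int id)
        {
            var dto = _client.GetCustomer(id);   // external web service call
            return new Customer { Id = dto.Id, Name = dto.Name };
        }

        public void Save(Customer customer)
        {
            _client.UpdateCustomer(new CustomerDto
            {
                Id = customer.Id,
                Name = customer.Name
            });
        }
    }

Controllers then depend only on ICustomerRepository, so if the web service is ever replaced by direct database access, only this one class has to change.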

In general, ORMs are not known for their ability to generate clean domain model classes. ORMs are known for creating data layers, which you don't appear to need in this case.
You could probably use a code generation tool like T4 to generate a first pass at your domain model classes, based on either the web service or the database, if that would save you time. Otherwise, you would probably just create the domain objects manually. Even if you generate a first pass, it's unlikely there is a clean 1-1 mapping from either the database or the web service to your domain objects, so you will likely need to spend significant time manually editing the generated classes anyway.
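For instance (a hypothetical sketch, with invented DTO and domain shapes), a generated first pass tends to mirror the wire format, while the hand-edited domain class can use richer types:

    using System;

    // What a generator might emit, mirroring the service's wire format:
    public class OrderDto
    {
        public int OrderId { get; set; }
        public string OrderDate { get; set; }  // dates arrive as strings
        public string Status { get; set; }     // "1" = Open, "2" = Shipped
    }

    // The domain class after manual editing:
    public enum OrderStatus { Open = 1, Shipped = 2 }

    public class Order
    {
        public int Id { get; set; }
        public DateTime PlacedOn { get; set; }
        public OrderStatus Status { get; set; }

        public static Order FromDto(OrderDto dto)
        {
            return new Order
            {
                Id = dto.OrderId,
                PlacedOn = DateTime.Parse(dto.OrderDate),
                Status = (OrderStatus)int.Parse(dto.Status)
            };
        }
    }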

Related

Grails Command Objects - What is the motivation behind them?

While reading "The Definitive Guide To Grails", I am a little confused as to Command Objects. They seem to be a wrapper around domain classes to assist with validation, but that functionality is already available in domain classes via built-in constraints and custom validators. So what does a command object really do, and what motivates us to need it?
The book starts the discussion on command objects by stating that
"Sometimes a particular action doesn’t require the involvement of a
domain class but still requires the validation of user input."
However, it then demonstrates the declaration and usage of a command object with regard to an Album domain class. So it seems that whatever a command object does is still closely related to domain classes. I'm sure my confusion is simply a result of my lack of understanding, so I would appreciate any clarification. Thanks.
They seem to be a wrapper around domain classes...
You can use command objects that way, but that isn't their primary use.
Command objects are useful when you want to encapsulate a group of request parameters and do something with them together. That something might or might not have anything to do with domain classes.
For example, you could have a Grails app with no domain classes at all, and command objects could still be really helpful. Imagine a Grails app that is just a service layer: it receives requests from web forms, or REST requests with a JSON body, validates the inputs, maybe does some math or anything at all, and then makes a REST call to some other backend process that might store the data in a database or generate reports. In a situation like that, there are a lot of reasons you might want to use command objects even though no domain classes are involved at all.
Don't get bound up thinking that command objects have to be tied to domain classes. Sometimes they are, but don't limit your thinking of them to that context. Use command objects when you want to relate a group of request parameters together and do something with them.
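Since the rest of this page is ASP.NET MVC, here is the same idea expressed there, as a hypothetical sketch: an input model bound from request parameters and validated with DataAnnotations, with no domain class behind it. Grails command objects play the same role, with constraints declared in Groovy instead.

    using System.ComponentModel.DataAnnotations;

    // Encapsulates a group of request parameters plus their validation
    // rules, with no domain class behind it.
    public class PasswordResetCommand
    {
        [Required, EmailAddress]
        public string Email { get; set; }

        [Required, StringLength(100, MinimumLength = 8)]
        public string NewPassword { get; set; }

        [Compare("NewPassword")]
        public string ConfirmPassword { get; set; }
    }

    // In the controller, binding and validation happen in one step:
    // public ActionResult Reset(PasswordResetCommand cmd)
    // {
    //     if (!ModelState.IsValid) return View(cmd);
    //     // hand off to a service; still no domain class involved
    // }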
I tend to use command objects that match what is happening in the UI layer: form submits can be validated with command objects and then passed into services that do the work of persisting them. It often makes sense to have your domain model be different than the UI flow you are working with.
My domain layer may also have looser constraints than some of the command objects, if I want to require that certain flows provide enough information.

Workflow with MVC 4 - EF 5 - SimpleMembershipProvider

So I want to build an application with MVC 4 and Entity Framework 5. I've built simple applications before, but now I need some security around my current effort... I have some confusion / questions that I was hoping someone could answer:
First... the MVC 4 Internet Application template implements SimpleMembershipProvider. I have read every primary article about modifying and implementing it... However, it uses a Code First implementation...
Problem: I have an existing database whose schema I would like to import into an EDMX, database-first... How do I implement the MVC 4 SimpleMembershipProvider when my database ties tightly and directly into the user table (userid)?... I know I can use my own user table as long as I designate the userid and username fields as documented... Will this affect the provider, or the existing "AccountController" code? Will these need to be modified?
Second, what I am looking for is a workflow for this architecture... I am "old school", mostly a database-first approach... My project is a huge WIP (work in progress). I have a foundation, but it will need to expand as needed... Can someone provide some insight into database-first vs. other approaches when there will be quite a bit of change management occurring?
You can still use Code First to map to an existing database. You may need to explicitly map properties to table columns where the mappings don't follow the default conventions, but that doesn't prevent you from using Code First.
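A rough sketch of what that explicit mapping can look like against an existing user table (table and column names are invented), plus the SimpleMembership call that designates the id and username columns:

    using System.Data.Entity;
    using WebMatrix.WebData;

    public class UserProfile
    {
        public int UserId { get; set; }
        public string UserName { get; set; }
    }

    public class MyContext : DbContext
    {
        public DbSet<UserProfile> UserProfiles { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Map to the existing table/columns instead of relying on
            // Code First's default naming conventions.
            modelBuilder.Entity<UserProfile>().ToTable("tbl_Users");
            modelBuilder.Entity<UserProfile>().HasKey(u => u.UserId);
            modelBuilder.Entity<UserProfile>()
                .Property(u => u.UserName)
                .HasColumnName("user_name");
        }
    }

    // At application startup, point SimpleMembership at the same table:
    // WebSecurity.InitializeDatabaseConnection(
    //     "DefaultConnection", "tbl_Users", "UserId", "user_name",
    //     autoCreateTables: false);

With autoCreateTables set to false, the provider expects the membership tables to already exist rather than trying to create its own.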
When transitioning from DB-first to another mindset, focus on how the objects interact with each other. At some point you will save the state of the objects after they have interacted, and this is where the ORM comes into play: it detects changes and executes the necessary SQL statements to persist the current state of the objects.
Think of the database as just another storage container. In theory it could be replaced by another persistent storage mechanism (document db, file, persistent hash table, in memory list, etc.). In reality it's not that simple, but the idea of treating the DB as just a simple storage container helps to break away from the monolithic database concept that is/was ingrained into most devs.
But don't lose perspective of the design either. If it's a simple forms-over-data app where you will be adding features in the future, then keep the design simple and don't try to totally abstract the DB away. You know it's there and the relationship to the UI is almost 1:1, so take advantage of that.
In its simplest form, separation of concerns can be achieved by using the MVC controller to manage the interaction between the model (mapped to the DB via ORM) and the view (Razor templates). My personal preference is to keep the ORM out of the views, so I typically query the database, map the domain model to a viewmodel, and then pass the viewmodel to the view.
Again, if it's a simple application and the screens map directly to the database, then viewmodels are probably overkill.
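For the non-trivial case, the query -> viewmodel -> view flow described above looks roughly like this (CustomerViewModel, MyContext and the entity shapes are illustrative names, not a prescribed implementation):

    using System.Linq;
    using System.Web.Mvc;

    public class CustomerViewModel
    {
        public string Name { get; set; }
        public int OpenOrders { get; set; }
    }

    public class CustomersController : Controller
    {
        private readonly MyContext _db = new MyContext();

        public ActionResult Details(int id)
        {
            var customer = _db.Customers.Find(id);
            if (customer == null)
                return HttpNotFound();

            // Map the ORM entity to a viewmodel so the Razor template
            // never touches the ORM directly.
            var vm = new CustomerViewModel
            {
                Name = customer.Name,
                OpenOrders = customer.Orders.Count(o => !o.Shipped)
            };
            return View(vm);
        }
    }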

Domain Driven Design vs Database Driven Design for an MVC Web application

I am expanding/converting a legacy Web Forms application into a totally new MVC application. The expansion is both in technology and in business use cases. The legacy application is a well-done Database Driven Design (DBDD). For example, if you have different types of Employees, like Operator, Supervisor, Store Keeper etc., and you need to add a new type, you just add some rows in a couple of tables and voila: your UI automatically has everything to add/update the new type of Employee.
However, the separation of layers is not so good.
The new project has two primary goals:
Extensibility (for current and future pipeline requirements)
Performance
I intend to create the new project replacing the Database Driven Design (DBDD) with a Domain Driven Design (DDD), keeping the extensibility requirement in mind. However, moving from a Database Driven Design to a Domain Driven Design seems to adversely impact the performance requirement when compared to the legacy DBDD application. In the legacy application, any call for data from the UI would interact directly with the database, and any data would be returned as a DataReader or (in some cases) a DataSet.
Now with a strict DDD in place, any call for data will be routed through the Business layer and the Data Access layer. This means each call would initialize a Business Object and a Data Access Object. A single UI page could need different types of data, and, this being a Web application, each page could be requested by multiple users. Also, an MVC Web application being stateless, each request would need to initialize the business objects and data access objects every single time.
So it seems that for a stateless MVC application, DBDD is preferable to DDD for performance.
Or is there a way in DDD to achieve both the extensibility that DDD provides and the performance that DBDD provides?
Have you considered some form of Command Query Separation, where updates go through the domain model but reads come back as DataReaders? Full-blown DDD is not always appropriate.
"Now with a strict DDD in place any call for data will be routed through the Business layer and the Data Access layer."
I don't believe this is true, and it's certainly not practical. I believe this should read:
Now with strict DDD in place, any call for a transaction will be routed through the business layer and the data access layer.
There is nothing that says you can't call the data access layer directly in order to fetch whatever data you need to display on the screen. It is only when you need to amend data that you need to invoke your domain model, which is designed around behavior. In my opinion this is a key distinction. If you route everything through your domain model you will have three problems:
Time - it'll take you MUCH longer to implement functionality, for no benefit.
Model Design - your domain model will be bent out of shape in order to meet the needs of querying rather than behavior.
Performance - not because of an extra layer, but because you won't be able to get aggregated data from your model as quickly as you can directly from a query. For example, consider the total value of all orders placed by a particular customer: it's much faster to write a query for this than to fetch all the order entities for the customer, iterate over them, and sum.
As Chriseyre2000 has mentioned, CQRS aims at solving these exact issues.
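A minimal sketch of that split, using the order-total example above (the command type, repository and schema are invented for illustration): writes go through the domain model, while the read side issues a plain parameterized query.

    using System.Data.SqlClient;

    // Write side: behaviour-driven, routed through the domain model.
    public class PlaceOrderHandler
    {
        private readonly IOrderRepository _orders;

        public PlaceOrderHandler(IOrderRepository orders)
        {
            _orders = orders;
        }

        public void Handle(PlaceOrder command)
        {
            var order = Order.Place(command.CustomerId, command.Lines);
            _orders.Save(order);
        }
    }

    // Read side: no domain objects, just the number the screen needs.
    public class OrderQueries
    {
        private readonly string _connectionString;

        public OrderQueries(string connectionString)
        {
            _connectionString = connectionString;
        }

        public decimal TotalOrderValueFor(int customerId)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "SELECT COALESCE(SUM(Total), 0) FROM Orders WHERE CustomerId = @id",
                conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                conn.Open();
                return (decimal)cmd.ExecuteScalar();
            }
        }
    }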
Using DDD should not have significant performance implications in your scenario. What you're worried about seems more like a data access issue. You refer to it as
initialize a Business Object and a Data Access Object
Why is 'initializing' expensive? What data access mechanisms are you using?
DDD with long-lived objects stored in a relational database is usually implemented with an ORM. If used properly, an ORM will have very little, if any, impact on performance for most applications. And you can always switch the most performance-sensitive parts of the app back to raw SQL if there is a proven bottleneck.
For what it's worth, NHibernate only needs to be initialized once on application startup; after that it uses the same ADO.NET connection pool as your regular data readers. So it all boils down to proper mapping, a sensible fetching strategy, and avoiding classic data access mistakes like 'n+1 selects'.
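For instance, the classic 'n+1 selects' appears when you iterate a list of customers and touch each one's lazily-loaded orders. An eager fetch collapses it into one round trip; a sketch assuming NHibernate's LINQ provider, an open ISession, and a mapped Customer class with an Orders collection:

    using System;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    public static class CustomerReport
    {
        public static void PrintOrderCounts(ISession session)
        {
            // The lazy version costs 1 query for the customers plus 1 per
            // customer for Orders; FetchMany issues a single joined query.
            var customers = session.Query<Customer>()
                .FetchMany(c => c.Orders)
                .ToList();

            foreach (var c in customers)
                Console.WriteLine("{0}: {1} orders", c.Name, c.Orders.Count);
        }
    }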

Where to put Entity Framework Data Model in MVC application? Specific example

First I want to refer to this post:
Where to put Entity Framework Data Model in MVC application?
My edmx will have 7-10 tables in it. Not more.
The problem is that I have to build the model I'm working with out of (let's say) 4 of those tables.
So I'm asking myself: are these tables real model representations, and would it be correct to put the edmx file in the "Models" folder? And how should I name this CONTAINER of models?
Or are 10 tables enough to justify a new project? What should I call the project? .DataAccess? How should I name the edmx file in it?
I don't have that much experience with MVC and EF and am trying to figure out the best practice here.
Update: This post tells me not to put it in the Models folder: "The model should be decoupled from the backend data store technology as much as possible."
Personally, my MVC projects (regardless of size) consist of the following as a minimum:
Data
Logic
Site
This structure seems to work pretty well as it separates business logic from storage and display.
You definitely don't want to put the EDMX in the Models folder, as that is reserved for view models. Best practice says that view models should be entirely disconnected from your storage entities.
In terms of naming the EDMX, I normally name it after the short name of the project; the more important thing is to get the namespace right for the EDMX so your models sit in the correct namespace location.
My response is based on Silverlight, and I understand it's a bit out of context since you're asking from an MVC viewpoint. But please allow me to illustrate where I put my EDMX.
First project solution
-Widgets. These are multiple UI projects with multiple XAML pages
-UI logic is heavy, orchestrating every widget and XAML page into one main user interface.
-View-Models. These are almost the equivalent of controllers in MVC. I use XAML to bind directly to View-Models, e.g. QuotationItemModel.vb, xyz.vb and such. Multiple XAML pages may share one VM.
-XAML pages are supposed to use command bindings per the implementing View-Models, e.g. a button click is routed to the VM. I didn't achieve this because the UI coordination logic (from another UI architect) was interfering with my hooking up the delegate command (the CanExecute Func(Of Object, Boolean) and Execute Action(Of Object)), causing a stack overflow in the first-level widgets' click events.
-Model. There is but one function here. Its job is to hook a delegate to the web service's async call-completed event and then trigger the web service. The delegate's implementation actually sits back in the View-Model, i.e. QuotationItemModel.vb, not inside the Model. There is truly only one function in Model.vb.
-There is no other logic in the Model; Model.vb just decides end points, HTTP bindings, WCF stuff.
-There is no EDMX whatsoever in this solution. The Model also knows nothing about the database.
Second project (but inside the third solution)
WCF implementation. Lightweight. Again, one function. Operation contracts only.
The code-behind only passes business objects into the third project.
The connection string for the EDMX is configured here and passed to the third project.
No other logic.
There is no awareness of the EDMX whatsoever.
Third project solution
-Begins with a simple factory to delegate logic and invoke classes
-What begins as simple factory logic becomes a very heavy backend. It uses design patterns to alleviate maintenance concerns; from here the patterns criss-cross between commands, strategies, abstract types, etc.
-The EDMX design is fully apparent in this layer.
-Business objects interact in a logical manner with the EDMX.
-I do either LINQ to Entities or parameterized queries here.
-This layer consists of business logic, such as: an Underwriting ID must exist before a claim transaction can be issued, or a quotation's running number sequence is based on the server date, etc.
-There is some manual mapping of business objects to Entities. Potentially tedious, but not always.
-The result is passed back as XML.
The third project could very well be a separate solution with another lightweight web service in between, producing readiness for a 3-tier architecture. Then I would produce my own connection string to the EDMX at this pure layer. But mine is now more like a '2.5-layer' architecture; I sheepishly expose the connection string in the middle tier's web config.
A tier means having another hardware platform altogether; layers are separations for domain-driven design in the problem space, i.e. the UI, communication and business domains. Technically speaking, the SQL Server database (beyond the EDMX) could very well sit on another tier, e.g. Windows Azure.
There are pros and cons that I can see here. Please be gentle with any criticism; I am new to layering, really.
Cons
Without exposing data contracts, my UI is blind when communicating in the language of business objects and contracts. Previously this was easily achieved by having the EDMX in the WCF layer.
I now use XElement to represent shared business objects, but I still need to figure out a way to expose the data contract without exposing database internals. Currently I 'instinctively' know and code the database fields in my XElements.
Potentially it's like silently binding to the backend EDMX. Silence is sometimes bad: if I get a column without data, there are many suspect causes. Nothing that cannot be solved via good error messaging in the XML result passed back. Using my imagination.
Weak mechanism for versioning. Perhaps new clients could interact with a separate operation contract for a silent redirection to Backend v2.0, whilst existing clients use Backend v1.0. This potentially means you now need two EDMX files, one each for the old and new databases.
Pros
Extreme decoupling. I can delete/rebuild the EDMX and the UI and WCF still compile. Only my third solution gets a compilation error in this extreme test.
From the Silverlight UI, triggering and communicating with a Microsoft Report Viewer report shares exactly the same classes invoked from the UI. There are no 'additional web service functions for the report' whatsoever. Whatever EDMX + logic the UI requests is exactly the same for the report, unless I choose otherwise.
PS: Silverlight communicates filter criteria to the report via the query string.
The report, again, is not aware of the EDMX. For example, if I delete the EDMX from the backend and then update the data connection in the report project, the report project still compiles without problems.
Readiness for migration to a multi-tier architecture without tears. Seasonal load balancing, growth in the customer base, etc. may trigger this investment in architecture.
Reusability of business logic. For example, if the boss gets tired of Silverlight, I just need to re-code the UI business objects, say, into JSON under HTML 5. There are no changes to the business logic whatsoever, except for new requirements: for example, expanding into Life Insurance to co-exist with the existing General Insurance module, which is currently coded in Silverlight. Imagine Life Insurance in HTML 5 coexisting with the same backend. Again, this is possible because neither front end is aware of the EDMX; I just need to focus on building the data contract within the new technology.
An unexpected (I am new to layering, really!) side effect: I can potentially test my backend separately from the UI, exercising LINQ to Entities (that EDMX) directly. Cool for unit testing.
Updating business logic does not affect a new deployment to IIS (the middle layer), except maybe when it comes to versioning.
Anyway, here's the Layered Application Solution Guidance from the talented software architect Serena Yeoh:
Layered Architecture Sample for .NET
http://layersample.codeplex.com/
http://layerguidance.codeplex.com/
Notice, in the sample you download, the ingenuity of having multiple UIs over different technologies on top of a common backend, where the EDMX lives and sleeps. What's more, it sits over Windows Workflow Foundation, selectively called as needed. You can see where Serena puts the EDMX, and you get workable running code. Pure bliss.

What’s the best approach to work with in memory domain objects in Grails?

I'm working on a Grails project that has some domain objects not persisted in the database. They are managed through a REST API, so all their CRUD operations will be done with this API instead of the database.
The point is to still be able to use some interesting Grails plug-ins (like searching using Compass).
For instance, the administration of the User domain objects is going to be managed with the REST API, so when the Users list is displayed, the REST method to retrieve the list of users will be invoked on the remote server. I hope this use case is clear enough :)
I can think of several ways to design this, but I'm not sure which is best:
Should I create the domain objects in the controller (and delete the previous Users stored in memory)?
It seems it's possible to define a domain class as non-persistable (with the mapping block, I think), but I'm not sure if this is the best approach, or where to load the data.
Is it better not to model the User as a Grails domain object at all?
Thanks in advance!
I would wrap the REST interactions in a service, and call the service from a controller. In that case, the service would get the response and create its objects, passing the list back to the controller. Controllers should just handle incoming requests, invoke application components, and return responses.
It seems you want models to represent the data in the other application, which is a good idea. Since you don't need GORM, you might want to define them in the 'groovy' folder of your app instead of the domain models folder. Then I think they will just be objects.
I'd go with non-domain objects in the src folder, though you'd need to check whether it's possible to use the mentioned plugins with them.
I wonder what domain class functionality you wish to get out of non-persistent classes?
