AngularJS and ASP.NET MVC: best strategy for client-side models

I'm currently looking into client-side model binding to HTML templates, especially with AngularJS. I was wondering what the best strategy is for retrieving client-side view models from the server, e.g. a view model containing not only the data being edited but also the data for select lists or drop-down lists, etc.
As I see it, one has several options:
retrieve one view model from the server, e.g. via Web API, containing ALL the data needed for the view
render a client-side view model as JavaScript inside the server-side HTML
retrieve the data for the view model using multiple Web API calls, e.g. one for the main data being edited and one for each set of supporting data (select lists)
I haven't encountered many examples of option 1, as Web API seems to be used mostly for CRUD operations returning data for one specific type of object, e.g. Person or Order.
Option 2 conforms to the practice of server-side view models with ASP.NET MVC, but I have not seen many examples using this technique in combination with AngularJS.
Option 3 looks clean if one considers separation of concerns, but has the disadvantage of multiple smaller Ajax requests.
Could you share your thoughts and experiences?

Personally, I use option #3. The app would make requests to "prepare the editor", such as populating dropdown lists, and requests to fetch the data you want to edit (or, if you are creating a new object, any default initial values). I think this separates concerns better than option #1, which ties together "model" data and "support" data.
But as you pointed out, this does make extra calls, and, if they are very numerous, they can noticeably slow down the page (or increase complexity: on a big form with lots of dependent fields, ordering may become important).
What I usually do is have the server provide a "combined" API (e.g. /editor/prepare-all) while also providing the small pieces (e.g. /editor/prepare-dropdown-1, /editor/prepare-dropdown-2). When your editor loads, you use the combined one; if there are dependencies between fields, you can request only the data for the dependent fields (e.g. /editor/prepare-dropdown-2?dropdown1-value=123). I believe this has little impact on the server's complexity.
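For illustration, a minimal Web API sketch of such a combined endpoint might look like this (EditorViewModel, the repository and its methods are invented names, not from the answer above):

// Hypothetical sketch of a combined "prepare the editor" endpoint:
// one call returns the record being edited plus all supporting lists.
public class EditorController : ApiController
{
    private readonly IEditorRepository _repository; // injected

    // GET /api/editor/prepare-all?id=123
    [HttpGet]
    public EditorViewModel PrepareAll(int id)
    {
        return new EditorViewModel
        {
            Person = _repository.GetPerson(id),          // the data being edited
            Countries = _repository.GetCountries(),      // dropdown contents
            Departments = _repository.GetDepartments()   // dropdown contents
        };
    }
}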

I would agree with st. never: I have definitely used option #3, and I think combining $resource and Web API would be a perfect RAD combination. However, I've also worked on very complex screens where I've wanted sub-second response times, so I've resorted to optimising the entire development 'column'. I develop using SQL Server as my backend database, so I've used its native support for XML to return structured XML from a stored procedure, which I then deserialise into a .NET model (POCO) and pass to a JSON serialiser for transfer to the browser. I might have some extra business processing to perform against the POCO, but this still leads to a very simple code structure for transferring a fairly complex structure of data. Typically it's also very fast, because I've made one call to the database, and monitoring and optimising one stored procedure is very simple.
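As a rough sketch of that pipeline, assuming the stored procedure returns a single XML document (the procedure and type names here are made up):

using System.Data;
using System.Data.SqlClient;
using System.Xml;
using System.Xml.Serialization;
using System.Web.Script.Serialization;

public string GetOrderEditorJson(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.GetOrderEditorData", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        // SQL Server's FOR XML output is read via ExecuteXmlReader...
        using (XmlReader reader = cmd.ExecuteXmlReader())
        {
            // ...deserialised into a POCO...
            var serializer = new XmlSerializer(typeof(OrderEditorModel));
            var poco = (OrderEditorModel)serializer.Deserialize(reader);
            // ...optional extra business processing against the POCO here...
            // ...then serialised to JSON for the browser.
            return new JavaScriptSerializer().Serialize(poco);
        }
    }
}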

Related

SAPUI5: how to write application data in an OData model back to the backend system

I'm quite new to the OData topic and am trying to understand the best-practice scenario for working with an OData service.
Scenario 1:
I have a complex application with several EntitySets coming from a remote OData model, which is loaded from an SAP backend. I can read data and bind it to UI controls; that's not the problem. What I am confused about is how I can/should write data back to the backend.
First assumption: OData with one-way binding.
The user manipulates input fields, dropdowns, tables and so on, and all data is written to the OData model with createEntry() or setProperty(). Right? Or should I use another JSONModel and collect all the user's changes there?
Question: how do I then transfer the changes made on the OData model to the backend? What is the best practice? I have read something about batch processing, and about having an own service and triggering it with the create() function. Can someone give some hints or some kind of recipe?
Scenario 2:
OData with two-way binding?
How does that work? Which prerequisites must the backend provide in its OData services? I have read that it's experimental.
You see, I'm a little bit confused.
It's important to know what you will be getting if you use one-way or two-way binding. Neither of these binding modes actually involves writing data back to the back-end OData service.
In short:
One-way binding means that the model (e.g. ODataModel) only keeps your UI controls in sync: changes made to the model are cascaded to the UI controls bound to it. However, when you change values in your UI controls, the updated values are not automatically written back to the model.
Two-way binding means that the model keeps your UI in sync (as with one-way binding), but on top of that, changes in your UI controls also cascade back to the model.
With one-way binding, you would indeed need to update the model programmatically using the createEntry and setProperty methods. With two-way binding, this is done automatically for you.
If you want changes to your model to be written back to your OData service on the server, you could run the 'submitChanges' method. This method will look at all changes made in the ODataModel and will send corresponding OData requests to the server to synchronise the changes with the back-end.
To make sure this is done in a consistent fashion, the ODataModel will wrap the required changes into a so-called change set. The back-end then knows which requests belong together and will be able to roll back all changes in a change set whenever one of the changes fails. In ABAP you would call this a logical unit of work (LUW).
Because it may be necessary to send multiple requests to the server (e.g. if the change set changes multiple entities), the ODataModel (v2) groups as many requests as possible into one batch. When batching is switched on (which is the default), only one request is sent to the server instead of many, which improves performance. It is advisable to switch batching off only for debugging purposes.
Please note that two-way binding in sap.ui.model.odata.ODataModel used to be experimental, but please don't use that class anymore, as it's outdated. Use sap.ui.model.odata.v2.ODataModel instead; it is much better and supports many more OData features (such as batching and two-way binding).
That's actually multiple answers in one, but I hope it clarifies some of the confusion.

Is it a good practice to use an MVC application's own Web API for Ajax bindings?

I'm writing an application that has many Ajax widgets (Kendo UI, to be precise). It's starting to get messy having all those Ajax responses outside the standard controllers, so I was considering giving each entity its own controller. If I'm taking the time to do this, I figured I might as well go forward and build these as Web APIs, since I was planning to do that in the not-so-distant future anyway... but hey, it would be done already...
So my question is: is it good practice to use an MVC application's own Web API as a feed for Ajax widgets, or is there any reason to stick with standard controllers?
I've seen some arguments about performance, but I don't think they apply to this situation. I believe they concerned a "controller calling Web API" scenario, which has obvious performance costs. But since it's already a client-side Ajax call, whether it goes to a standard MVC controller or a Web API controller shouldn't change a thing, would it?
Edit
Additional information regarding the project:
I am using Entity Framework for the data access.
I have a repository pattern going on with UnitOfWork.
I am using a proper MVC structure (EF POCOs auto-mapped to DTO POCOs in the repository and fed into view models by the controllers)
This is an MVC 4 project on .NET 4.0
There are a lot of database relationships (especially for the object I'm working with at the moment)
I don't know about "good practice", but it's certainly not "bad practice". I see no difference whether you do it in the same app or a different one.
I think it's a good thing, but only if what you expose in the API is kept as generic as possible, so that other applications and services can reuse it.
Both the applications I have written and continue to maintain use pretty much the exact same stack as your app.
I have recently refactored one of the applications to use the API for all the common things, like the lists I bind to Kendo ComboBoxes etc. in my views. It's a fairly large application that reuses a lot of the same lists (such as states, priorities and complexities) across various entities and views, so it makes sense to put those in the API.
I haven't gone the whole hog, though. I draw the line at things like this:
// Shapes the response specifically for a Kendo Grid's DataSource.
public ActionResult GetAjaxProjectsList([DataSourceRequest] DataSourceRequest request)
{
    return Json((DataSourceResult)GetProjectsList().ToDataSourceResult(request),
                JsonRequestBehavior.AllowGet);
}
That is very specific to how the Kendo Grid wants the data back. Nothing else that connects to this app will use data in this format, so I keep it in the controller.
In short... I use the API for common things within the same MVC app and things that I allow to be used by other applications or services, like Excel.
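By contrast, the common, reusable lists end up as plain Web API actions along these lines (a sketch; StateDto and the unit-of-work members are hypothetical names):

public class StatesController : ApiController
{
    private readonly IUnitOfWork _unitOfWork; // injected

    // GET /api/states - plain data, so a Kendo ComboBox, another
    // service, or even Excel can all consume the same endpoint.
    public IEnumerable<StateDto> Get()
    {
        return _unitOfWork.States.GetAll()
                          .Select(s => new StateDto { Id = s.Id, Name = s.Name });
    }
}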

Domain Driven Design vs Database Driven Design for an MVC Web application

I am expanding/converting a legacy Web Forms application into a totally new MVC application. The expansion is in terms of both technology and business use case. The legacy application is a well-done Database Driven Design (DBDD). So, for example, if you have different types of Employees (Operator, Supervisor, Store Keeper, etc.) and you need to add a new type, you just add some rows to a couple of tables and voila, your UI automatically has everything needed to add/update the new type of Employee.
However, the separation of layers is not so good.
The new project has two primary goals
Extensibility (for current and future pipeline requirements)
Performance
I intend to create the new project replacing the Database Driven Design (DBDD) with a Domain Driven Design (DDD), keeping the extensibility requirement in mind. However, moving from Database Driven Design to Domain Driven Design seems to hurt the performance requirement when compared to the legacy DBDD application. In the legacy application, any call for data from the UI would interact directly with the database, and data would be returned as a DataReader or (in some cases) a DataSet.
Now, with a strict DDD in place, any call for data will be routed through the business layer and the data access layer. This means each call would initialize a business object and a data access object. A single UI page could need different types of data and, this being a web application, each page could be requested by multiple users. Also, since an MVC web application is stateless, each request would need to initialize the business objects and data access objects every single time.
So it seems that, for a stateless MVC application, DBDD is preferable to DDD as far as performance goes.
Or is there a way in DDD to achieve both the extensibility that DDD provides and the performance that DBDD provides?
Have you considered some form of Command Query Separation, where updates go through the domain model yet reads come back as DataReaders? Full-blown DDD is not always appropriate.
"Now with a strict DDD in place any call for data will be routed through the Business layer and the Data Access layer."
I don't believe this is true, and it's certainly not practical. I believe this should read:
Now with strict DDD in place, any call for a transaction will be routed through the business layer and the data access layer.
There is nothing that says you can't call the data access layer directly in order to fetch whatever data you need to display on the screen. It is only when you need to amend data that you must invoke your domain model, which is designed around behavior. In my opinion this is a key distinction. If you route everything through your domain model, you will have three problems:
Time - it'll take you MUCH longer to implement functionality, for no benefit.
Model design - your domain model will be bent out of shape to meet the needs of querying rather than behavior.
Performance - not because of an extra layer, but because you won't be able to get aggregated data from your model as quickly as you can directly from a query. For example, consider the total value of all orders placed by a particular customer: it's much faster to write a query for this than to fetch all the order entities for that customer, iterate over them, and sum.
As Chriseyre2000 has mentioned, CQRS aims at solving these exact issues.
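A minimal sketch of that split, with all type and member names invented for illustration: commands go through the behavioral domain model, while queries go straight to SQL.

// Write side: load the aggregate and invoke domain behavior, then commit.
public void PlaceOrder(PlaceOrderCommand cmd)
{
    Customer customer = _customerRepository.GetById(cmd.CustomerId);
    customer.PlaceOrder(cmd.ProductId, cmd.Quantity); // behavior, not data-shuffling
    _unitOfWork.Commit();
}

// Read side: aggregate directly in SQL; no entities are materialized.
public decimal GetTotalOrderValue(int customerId)
{
    using (var conn = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(
        "SELECT COALESCE(SUM(TotalValue), 0) FROM Orders WHERE CustomerId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", customerId);
        conn.Open();
        return (decimal)cmd.ExecuteScalar();
    }
}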
Using DDD should not have significant performance implications in your scenario. What you're worried about seems more like a data access issue. You refer to it as
initialize a Business Object and a Data Access Object
Why is 'initializing' expensive? What data access mechanisms are you using?
DDD with long-lived objects stored in a relational database is usually implemented with an ORM. Used properly, an ORM will have very little, if any, impact on performance for most applications. And you can always switch the most performance-sensitive parts of the app back to raw SQL if there is a proven bottleneck.
For what it's worth, NHibernate only needs to be initialized once, at application startup; after that it uses the same ADO.NET connection pool as your regular data readers. So it all boils down to proper mapping, a sensible fetching strategy and avoiding classic data access mistakes like 'n+1 selects'.
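For example, the one-time setup usually amounts to something like this (a sketch; the mappings are assumed to be configured in hibernate.cfg.xml, and Customer is a hypothetical mapped class):

// Built once at application startup: this is the expensive part.
public static class NHibernateBootstrap
{
    public static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();
}

// Per request: sessions are cheap to open and draw on the
// regular ADO.NET connection pool.
public Customer LoadCustomer(int customerId)
{
    using (ISession session = NHibernateBootstrap.SessionFactory.OpenSession())
    {
        return session.Get<Customer>(customerId);
    }
}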

Where to put Entity Framework Data Model in MVC application? Specific example

First I want to refer to this post:
Where to put Entity Framework Data Model in MVC application?
My edmx will have 7-10 tables in it. Not more.
The problem is that I have to build the model I'm working with out of (let's say) 4 of those tables.
So I'm asking myself: are these tables real model representations, would it be correct to put the edmx file in the "Models" folder, and what should I name this CONTAINER of models?
Or are 10 tables enough to justify creating a new project? What should the project be called, .DataAccess? And what should the edmx file in it be named?
I don't have much experience with MVC and EF and am trying to figure out best practice here.
Update: This post tells me not to put it in the Models folder: "The model should be decoupled from the backend data store technology as much as possible."
Personally my MVC projects (regardless of size) consist of the following as a minimum:
Data
Logic
Site
This structure seems to work pretty well as it separates business logic from storage and display.
You definitely don't want to put the EDMX in the Models folder, as that is reserved for view models. Best practice says that view models should be entirely disconnected from your storage entities.
In terms of naming the EDMX, I normally name it after the short name of the project; the more important thing is to get the EDMX's namespace right so your models sit in the correct namespace location.
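To make the "view models disconnected from storage entities" point concrete, a small sketch (the Person entity and PersonViewModel are hypothetical):

// Person is generated from the EDMX and lives in the data project.
// PersonViewModel lives in the MVC project's Models folder and is
// shaped for the view, not for the table.
public class PersonViewModel
{
    public string FullName { get; set; }
}

// The controller maps between the two, so views never see storage entities.
public ActionResult Edit(int id)
{
    Person entity = _db.People.Single(p => p.Id == id);
    var vm = new PersonViewModel { FullName = entity.FirstName + " " + entity.LastName };
    return View(vm);
}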
My response is based on Silverlight, and I understand it's a bit out of context because you are asking from an MVC viewpoint, but please allow me to illustrate where I put my EDMX.
First project solution
- Widgets: multiple UI projects with multiple XAML pages.
- UI logic is heavy, orchestrating every widget and XAML page in one main user interface.
- View-Models: these are almost the equivalent of controllers in MVC. I use XAML to bind directly to view-models, e.g. QuotationItemModel.vb, xyz.vb and such. Multiple XAML pages may share one VM.
- XAML pages are supposed to use command bindings as per the view-model implementation, e.g. a button click is routed to the VM. I didn't achieve this because the UI coordination logic (from another UI architect) interfered with my hooking up the delegate command (the CanExecute/Execute pair, Func(Of Object, Boolean) and Action(Of Object)), causing a stack overflow in first-level widgets' click events.
- Model: there is but one function here. Its job is to hook a delegate to the web service's async-call-completed event and then trigger the web service. The delegate's implementation actually sits back in the view-model, i.e. QuotationItemModel.vb, not inside the Model; there is truly only one function in Model.vb.
- There is no other logic in the Model; Model.vb decides endpoints, HTTP bindings, WCF stuff.
- There is no EDMX whatsoever in this solution. The Model also knows nothing about the database.
Second project (but inside the third solution)
WCF implementation. Lightweight; again, one function. Operation contracts only.
The code-behind only passes business objects on to the third project.
The connection string for the EDMX is configured here and passed to the third project.
No other logic.
There is no awareness of the EDMX whatsoever.
Third project solution
- Begins with a simple factory to delegate logic and invoke classes.
- The simple factory logic grows into a very heavy backend, using design patterns to ease maintenance concerns. From here, the patterns criss-cross between command, strategy, abstract types, etc.
- The EDMX design is fully apparent in this layer.
- Business objects interact with the EDMX in a logical manner.
- I use either LINQ to Entities or parameterized queries here.
- This layer contains the business logic, e.g. an underwriting ID must exist before a claim transaction can be issued, or a quotation's running number sequence is based on the server date, etc.
- There is some manual mapping of business objects to entities. Potentially tedious, but not always.
- The result is passed back as XML.
The third project could very well become a separate solution with another lightweight web service in between, giving readiness for a 3-tier architecture. Then I would produce my own connection string to the EDMX at this pure layer. But mine is now more like a "2.5-layer" architecture: I sheepishly expose the connection string in the middle tier's web.config.
Architecture means having another hardware platform altogether; layers are the separations of a domain-driven design in problem space, i.e. the UI, communication and business domains. Technically speaking, the SQL Server database (beyond the EDMX) could very well sit on another architecture, e.g. Windows Azure.
There are pros and cons as I see it. Please bring any criticisms gently; I am new to layering, really.
Cons
Without exposed data contracts, my UI is blind when communicating in the language of business objects and contracts. Previously this was easily achieved by having the EDMX in the WCF layer.
I now use XElement to represent shared business objects, but I still need to figure out a way to expose the data contract without exposing database internals. Currently, I "instinctively" know and code the database fields into my XElements.
Potentially it's like silently binding to the backend EDMX. Silence is sometimes bad: if I get a column without data, there are many suspected causes. Nothing that cannot be solved via good error messaging in the XML result passed back. Using my imagination.
A weak mechanism for versioning. Perhaps new clients would interact with a separate operation contract that silently redirects to Backend v2.0, whilst existing clients use Backend v1.0. This potentially means having two EDMXs, one each for the old and the new database respectively.
Pros
Extreme decoupling. I can delete/rebuild the EDMX and the UI and WCF projects still compile. Only my third solution gets compilation errors in this extreme test.
From the Silverlight UI, triggering and communicating with a Microsoft Report Viewer report shares exactly the same classes invoked by the UI. There is no "additional web service function for reports" whatsoever. Whatever EDMX + logic the UI requests is exactly the same for the report, unless I choose otherwise.
PS: Silverlight communicates the filter criteria to the report via the query string.
The report, again, is not aware of the EDMX. For example, if I delete the EDMX from the backend and then update the data connection in the report project, the report project still compiles without problems.
Readiness for migration to multiple architectures without tears. Seasonal load balancing, growth in the customer base, etc. may trigger this investment in architecture.
Reusability of business logic. For example, if the boss gets tired of Silverlight, I just need to re-code the UI business objects, say into JSON under HTML5. There are no changes to the business logic whatsoever, except for new requirements. For example, to expand into life insurance alongside the existing general insurance module (currently coded in Silverlight), imagine life insurance in HTML5 coexisting with the same backend. Again, because neither front end is aware of the EDMX, I just need to focus on building the data contract within the new technology.
An unexpected (I am new to layering, really!) side effect: I can potentially test my backend separately from the UI, exercising LINQ to Entities (that EDMX) directly. Cool for unit testing.
Updating business logic does not affect a new deployment to IIS (the middle layer), except maybe when it comes to versioning.
Anyway, here's Layered Application Solution Guidance from the talented software architect Serena Yeoh:
Layered Architecture Sample for .NET
http://layersample.codeplex.com/
http://layerguidance.codeplex.com/
Notice, in the sample you download, the ingenuity of having multiple UIs in different technologies over a common backend, where the EDMX lives and sleeps, and, what's more, over Windows Workflow Foundation, selectively called as needed. You can see where Serena puts the EDMX, and you can have at it with workable, running code. Pure bliss.

Editing hierarchical data in ASP.NET MVC 2

Can anyone point me to some good resources that can help me understand the best way to work with hierarchical data in ASP.NET MVC 2?
I have an application under development that requires an interface allowing users to add, remove and modify children and grandchildren of my root object. The user can make multiple changes without persistence; only when they click "Save" will the entire object graph be saved.
I've seen one article that serialized the object and stored the data in a hidden field on the form, but that seems really kludgy, and I am dealing with a lot of data.
If I were doing this in standard ASP.NET, I'd be looking at using child windows and the like to display the edit pages, keeping an instance of the object being edited in Session (which is bad in and of itself). But I've been told we are using MVC, as we are standardizing our platforms (though not moving up to MVC 3 yet).
Essentially I need the app to display the properties of my root object, which include a child collection of objects. The UI should allow the user to add new items to the collection, remove existing items, and "open" an item for editing. These child items also contain their own editable lists of grandchildren. All of this needs to happen without round trips across the wire to persist data (it's a distributed architecture with all data access behind a WCF service interface).
The examples on www.asp.net all persist the data each time a single change is made, i.e. on each postback. But that would require major schema changes and extra code to deal with temporary versus committed objects, plus the overhead of the service calls each time. I'm looking for a better solution.
Have you considered looking at a client-side library like Knockout.js? I've found that it is excellent at manipulating collections and posting the final version as JSON. Here is an example of what you can do with it, and here is an article about how to integrate it with MVC 2. This is my absolute favorite JS library.
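On the server side, the final JSON post can then be handled by one action that rebuilds the whole graph and makes a single WCF save call. A rough sketch, assuming MVC 2 without built-in JSON model binding (RootViewModel and the service field are invented names):

[HttpPost]
public ActionResult Save(string modelJson)
{
    // Deserialize the entire edited graph (root, children, grandchildren) in one go.
    var serializer = new JavaScriptSerializer();
    RootViewModel root = serializer.Deserialize<RootViewModel>(modelJson);

    // One round trip across the wire: the WCF service persists everything at once.
    _service.SaveGraph(root);

    return Json(new { success = true });
}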
