Breeze and RESTful WebAPI

Question:
What value does breeze provide when I need to implement my own POST/PUT/GET endpoints per entity in WebAPI?
Background:
This seems to be a common implementation of a serverside Breeze controller:
[BreezeController]
public class TodosController : ApiController {

    readonly EFContextProvider<TodosContext> _contextProvider =
        new EFContextProvider<TodosContext>();

    // ~/breeze/todos/Metadata
    [HttpGet]
    public string Metadata() {
        return _contextProvider.Metadata();
    }

    // ~/breeze/todos/Todos
    // ~/breeze/todos/Todos?$filter=IsArchived eq false&$orderby=CreatedAt
    [HttpGet]
    public IQueryable<TodoItem> Todos() {
        return _contextProvider.Context.Todos;
    }

    // ~/breeze/todos/SaveChanges
    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle) {
        return _contextProvider.SaveChanges(saveBundle);
    }

    // other miscellaneous actions of no interest to us here
}
I'm in the middle of building a RESTish API that, up to this point, has endpoints like:
GET /api/todo/1
PUT /api/todo
POST /api/todo
It seems like Breeze requires the endpoints to be much simpler (for better or worse) - just a bunch of GETs and a SaveChanges POST endpoint.
This leads me to think that Breeze makes rapid development with a single web client, well, a breeze... but as soon as you have anonymous clients, you have to force them into whatever Breeze interface conventions you've created in your client, which seems to defeat the purpose of RESTful API design. Is this the case?

Breeze is, first and foremost, a client-side JavaScript framework. If you're not using Breeze on the client, the benefits of Breeze.WebApi are limited to
Enhanced OData query support ($select and $expand support, extended $orderby)
Save interception points (beforeSaveEntity and beforeSaveEntities events)
Save result handling (updated entity keys, concurrency columns)
Metadata extraction and serialization
As you've surmised, Breeze has a different philosophy from REST regarding CRUD operations.
Breeze is designed for clients who may want to C/U/D many resources, of different types, in a single transaction. This allows users to manipulate the data in complex ways without hitting the server, and then save their changes when they are ready. For example, one could create a new Order, move two OrderLineItems from one Order to another, delete a third OrderLineItem, modify the quantity on a fourth, and then call SaveChanges(). Breeze even supports using localStorage to work completely disconnected from the server. Once reconnected, the changes can all be saved.
REST was designed to operate on one resource at a time. Each C/U/D operation must be performed against the server immediately so that the response code can be acted upon. It works well for applications with simple update needs, but not for data-entry applications. While transactions can be supported in REST, they are cumbersome at best.
Having said that, your server-side Breeze API is not limited to what you see in the Todos example. Breeze supports Named Saves, which allows you to have different endpoints for different operations. You can also use Save Interception to ensure that your save bundle only contains the types that it should. And naturally, there's nothing preventing you from exposing both APIs on your server, and having both fed by the same persistence layer.
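As a rough sketch of what a named save could look like on the server (the action name SaveTodoAndNotes and the idea of filtering the bundle are illustrative assumptions, not something Breeze prescribes), it is simply another POST action that accepts the save bundle:

// ~/breeze/todos/SaveTodoAndNotes
[HttpPost]
public SaveResult SaveTodoAndNotes(JObject saveBundle) {
    // Save interception could inspect the bundle here and reject any
    // entity types this endpoint is not meant to accept before delegating.
    return _contextProvider.SaveChanges(saveBundle);
}

The client opts into this endpoint when saving, so different operations can be routed to different actions while the persistence work stays in the context provider.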
If you have to decide between them, you should start with your users. Real users (not developers) don't care about REST, they care about what the application can do. Ultimately, REST gives your application all the semantics of HTTP, and Breeze gives it all the semantics of a relational or object database. Which one to expose to your users should depend upon the use cases you need to support.

Related

How can I access lookup data via an injected service from within an entity?

My application is using DDD with .NET Core and EF Core. I have some business rules that run within an entity that need to check dates against a cached list of company holiday dates. The company holidays are loaded from the db and cached by an application service that is configured with our DI container so it can be injected into our controllers, etc.
I cannot determine how, or if it's the right/best approach, to get the service injected into the entity so it can grab those dates when running business rules. I did find this answer that appears to demonstrate one way to do it, but I wanted to see if there were any additional options, because that approach has a bit of a code smell to me at first glance (adding a property to the DbContext and grabbing it off a context injected through a private constructor).
Are there any other ways to accomplish something like this?
ORM classes are very rarely your domain objects. If you can start with your domain and seamlessly map to an ORM without the need for infrastructure-specific alterations or attributes, then that is fine; otherwise you need to split your domain objects from your ORM objects.
You should not inject any services or repositories into aggregates. Aggregates should focus on the command/transactional side of the solution, work with pre-loaded state, and avoid requesting additional state through any mechanism handed to them. The state should be obtained and handed to the aggregate.
In your specific scenario I would suggest loading your BusinessCalendar and then hand it to your aggregate when performing some function, e.g.:
public class TheAggregate
{
    public bool AttemptRegistration(BusinessCalendar calendar)
    {
        if (!calendar.IsWorkingDay(DateTime.Now))
        {
            return false;
        }

        // ... registration code

        return true;
    }

    // or perhaps...

    public void Register(DateTime registrationDate, BusinessCalendar calendar)
    {
        if (!calendar.IsWorkingDay(registrationDate))
        {
            throw new InvalidOperationException();
        }

        // ... registration code
    }
}
Another take on this is to have your domain ignore this bit and place the burden on the calling code. In this way, if you ask your domain to do something it will do so, since, perhaps, a registration on a non-working day (in my trivial example) may be performed in some circumstances. In these cases the application layer is responsible for checking the calendar for "normal" registration or overriding the default behaviour when circumstances require it. This is the same approach one would take for authorisation. The application layer is responsible for authorisation and the domain should not care about that. If you can call the domain code then you have been authorised to do so.
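To make the calling-code side concrete, here is a minimal sketch of an application service, assuming a cached calendar service and an aggregate repository registered in your DI container (IBusinessCalendarService, ITheAggregateRepository, and their method names are illustrative, not from your codebase):

using System;

public class RegistrationApplicationService
{
    private readonly IBusinessCalendarService _calendarService;
    private readonly ITheAggregateRepository _repository;

    public RegistrationApplicationService(IBusinessCalendarService calendarService,
                                          ITheAggregateRepository repository)
    {
        _calendarService = calendarService;
        _repository = repository;
    }

    public void Register(Guid aggregateId, DateTime registrationDate)
    {
        // The state is obtained here and handed to the aggregate;
        // the aggregate never reaches out to a service itself.
        BusinessCalendar calendar = _calendarService.GetCurrentCalendar();
        TheAggregate aggregate = _repository.Get(aggregateId);

        aggregate.Register(registrationDate, calendar);

        _repository.Save(aggregate);
    }
}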

IEnumerable vs IQueryable in OData and Repository Pattern

I watched this video and read this blog post. There is something in this post that confused me: the last part. In the last part Mosh emphasized that a repository should never return IQueryable, because it results in performance issues. But I read something that sounds contradictory.
This is the confusing part:
IEnumerable: While querying data from a database, IEnumerable executes the select query on the server side, loads the data in-memory on the client side and then filters it. Hence it does more work and becomes slow.
IQueryable: While querying data from a database, IQueryable executes the select query on the server side with all filters. Hence it does less work and becomes fast.
this is another answer about IQueryable vs IEnumerable in Repository pattern.
These are the opposite of Mosh's advice. If they are true, why shouldn't we use IQueryable instead of IEnumerable?
And something else: what about situations where we want to use OData? As you know, it's better to use IQueryable instead of IEnumerable when querying via OData.
One more thing: is it good or bad to use OData for querying e-commerce website APIs?
Please let me know your opinion.
Thank you
A repository should never return an IQueryable. But not due to performance. It's due to complexity. A repository is about reducing the complexity in the business layer.
By exposing an IQueryable you increase the complexity in two ways:
You leak persistence knowledge into the business domain. There are things you must know about the underlying LINQ to SQL provider to write effective queries.
You must design the business entities so that querying them is possible (i.e. not pure business entities).
Examples:
var blockedUsers = _repository.GetBlockedUsers();
// vs
var blockUsers = _dbContext.Users.Where(x => x.State == 1);

var user = _repos.GetById(1);
// and an enum is used internally in the user class
user.Block();
_repos.Update(user);
// vs
var user = _dbContext.Users.FirstOrDefault(x => x.Id == 1);
user.State = 1;
_dbContext.SaveChanges();
By wrapping everything behind your repository, you design your business entities in a way that makes it easy to work with them (child entities, enums, date management etc.). And you design the repository so that those entities can be stored in an efficient way. No compromises, and code that is more easily maintained.
Regarding OData: Do not use the repository pattern. It doesn't add any value in that case.
If you insist on using IQueryable in your business domain, do not use the repository pattern. It would only complicate things without adding any value.
Finally:
Business logic that uses properly designed repositories is so much easier to test (unit tests). Code where LINQ and business logic are mixed must ALWAYS be integration tested (against a DB), since LINQ to SQL differs from LINQ to Objects.
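To illustrate that testability point, here is a hedged sketch; IUserRepository and UserBlockingService are illustrative names, and the User entity is assumed to expose Block() as in the earlier example:

using System.Collections.Generic;

public interface IUserRepository
{
    User GetById(int id);
    IReadOnlyList<User> GetBlockedUsers();
    void Update(User user);
}

public class UserBlockingService
{
    private readonly IUserRepository _repository;

    public UserBlockingService(IUserRepository repository)
    {
        _repository = repository;
    }

    public void Block(int userId)
    {
        var user = _repository.GetById(userId);
        user.Block();               // the state/enum handling stays inside the entity
        _repository.Update(user);
    }
}

In a unit test, IUserRepository can be replaced with an in-memory fake or a mock, so no LINQ provider or database is involved.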

Proper implementation of Repository Pattern using IQueryables?

I'm building a Repository layer for my MVC application with methods like GetObject, UpdateObject, DeleteObject, etc.
This is what I have now:
public List<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false).ToList();
}
But I'm wondering if it would be better to return IQueryables for lists so that the least amount of data gets sent to the client when filters are applied in the UoW or Service layers. Would it be best to do something like this?
public IQueryable<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false);
}
The not-so-nice thing about returning IQueryable is that if you ever have a different implementation of the repository, say using a different ORM, or storing data in a non-SQL database, the cloud, or an XML file, it would be hard to implement the same interface. It would be much easier if you returned more generic collections of domain objects, for example IEnumerable. You can always pass filtering criteria in.
The other drawback of returning IQueryable is that by the time you actually run the query, your object context may already be disposed (depending on your implementation) or may be kept in memory longer than required.
A leaky abstraction such as IQueryable can cause problems. For example, imagine you want to get some data from the database and order it by Guid. If you enumerate the query by calling ToList() prior to sorting, you'll get different results than if you sort before enumerating. The reason is that in the first case the sorting happens in .NET, while in the other case it happens in SQL, which uses a completely different ordering.
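A minimal sketch of that pitfall, assuming an EF-style context with a Users set whose Id column is a Guid (both names are illustrative):

// Sorting after ToList() uses .NET's Guid comparison ...
var inMemoryOrder = context.Users.ToList().OrderBy(u => u.Id).ToList();

// ... while sorting before enumeration is translated to ORDER BY in SQL,
// and SQL Server orders uniqueidentifier values by different byte groups.
var sqlOrder = context.Users.OrderBy(u => u.Id).ToList();

// The two lists can come back in different sequences even though they
// contain exactly the same rows.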
The nice thing about returning IQueryable here is that you can continue to build up your query further without hitting the db. Once you call ToList it will hit the db and you can't customize your query further without hitting the database a second time.
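For example, the IQueryable returned above can keep being composed and only executes once; the Name and Id properties on Object here are illustrative:

IQueryable<Object> query = GetObjects();            // no database hit yet
query = query.Where(o => o.Name.Contains("foo"));   // still composing the query
var page = query.OrderBy(o => o.Id)
                .Skip(20)
                .Take(10)
                .ToList();                          // one SQL query executes here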

Time to start returning IQueryable<T> instead of IList<T> to my Web UI / Web API Layer?

I've got a multi-layer application that starts with the repository pattern for all data access and returns IQueryable to the Services layer. The Services layer, which includes all of the business logic, returns IList to the Controllers (note: I'm using ASP.NET MVC for the UI layer). The benefit of returning IQueryable in the data access layer is that it allows my repositories to be extremely simple and the database queries to be deferred. However, I'm triggering the database queries in my services layer so that my unit tests are more reliable and I don't give the Controllers the flexibility to reshape my queries. However, I've recently encountered several situations where deferring the execution of queries down to the Controllers would have been significantly more performant, because the Controllers had to do some UI-specific projections on the data. Additionally, with the emergence of things like OData, I was starting to wonder if endpoints (e.g. a web UI or web APIs) should be working directly with IQueryable. What are your thoughts? Is it time to start returning IQueryable from the services layer to the UI layer? Or stick with IList?
This thread here: To return IQueryable<T> or not return IQueryable<T>
seems to vouch for returning IList to the UI layers, but I was wondering if things are changing because of new emerging technologies and techniques.
I like to stick with the IQueryable interface when possible. The only problem is when you end up doing complex filtering or re-querying on demand at the controller level. If you have something like:
// DATA ACCESS
public IQueryable<Student> GetStudents()
{
    return db.Students;
}
And in your controller you do some reshaping because your client wants to filter some of the data in that result, surely you will be tempted to do it at the controller level:
var result = obj.GetStudents().Where(d=>d...);
and for me that's OK, but just imagine if another module needs to use that same filter; you can't call it because it lives at the controller level.
So for me it's a matter of balance between DRY, flexibility, and how scalable the system is.
If you need a fully scalable system, you will need to add one or more overloads of the GetStudents() method and get rid of any reshaping at the controller level.
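As a rough sketch of such an overload (db, SchoolContext, and Student mirror the snippet above and are illustrative), the filter moves into the data access layer so any module can reuse it:

using System;
using System.Linq;
using System.Linq.Expressions;

public class StudentRepository
{
    private readonly SchoolContext db = new SchoolContext(); // hypothetical EF context

    // Original shape: return everything and let callers compose.
    public IQueryable<Student> GetStudents()
    {
        return db.Students;
    }

    // Overload: the caller supplies the filter, but the query still lives
    // in the data access layer and can be reused by other modules.
    public IQueryable<Student> GetStudents(Expression<Func<Student, bool>> filter)
    {
        return db.Students.Where(filter);
    }
}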

Is my ASP.NET MVC application structured properly?

I've been going through the tutorials (specifically ones using Linq-To-Entities) and I understand the basic concepts, however some things are giving me issues.
The tutorials usually involve only simple models and forms that only utilize basic create, update and delete statements. Mine are a little more complicated, and I'm not sure I'm going about this the right way because when it comes time to handle the relationships of a half dozen database objects, the tutorials stop helping.
For the post method, the usual way of performing CRUD operations
entities.AddToTableSet(myClass);
entities.SaveChanges();
Won't do what I want, because a fully implemented class isn't getting posted to the controller method. I may post individual fields, form collections, or multiple DTO objects and then call a method on a service or repository to take the information I receive from a form post, along with information that it needs to query for or create itself, and then from all of those things, create my database object that I can save.
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Add(int id, [Bind(Exclude = "Id")] ClassA classA,
                        [Bind(Exclude = "Id")] ClassB classB)
{
    // Validation occurs here
    if (!ModelState.IsValid)
        return View();

    try
    {
        _someRepositoryOrService.Add(id, classA, classB);
        return RedirectToAction("Index", new { id = id });
    }
    catch (Exception ex)
    {
        // Logging and exception handling occurs here
        return View();
    }
}
public void Add(int id, ClassA classA, ClassB classB)
{
    EntityA eA = new EntityA
    {
        // Set a bunch of properties using the two classes and
        // whatever queries are needed
    };

    EntityB eB = new EntityB
    {
        // Set a bunch of properties using the two classes and
        // whatever queries are needed
    };

    _entity.AddToEntityASet(eA);
    _entity.AddToEntityBSet(eB);
    _entity.SaveChanges();
}
Am I handling this correctly, or am I bastardizing the framework? I never actually use an entity object directly; whenever I query for one I put the information I need in a DTO and base my Views off of that. The same goes for creation. Is this allowed, or does my avoidance of using entities directly go against the purpose of using the framework?
Edit: I'm also worried about this approach because it requires empty constructors to properly do the LINQ queries because of this error message:
Only parameterless constructors and initializers are supported in LINQ to Entities.
This isn't a big deal since I rarely need logic in the constructors, but is it an issue to have no constructors and only public properties?
_someRepositoryOrService.Add(id, classA, classB);
I would say you are coupling your repositories to the presentation layer. This shouldn't be. Your repositories should only work with entities. Next, notice how your Add method
public void Add(int id, ClassA classA, ClassB classB)
breaks Separation of Concerns (SoC). It performs two tasks:
map view data into entities
save to repository
Obviously the first step should be done in the presentation layer. Consider using model binders for this. They can also help you solve the constructors problem, since your model binders can be made aware of the construction requirements.
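As a hedged sketch of the idea in classic ASP.NET MVC (the ClassA constructor and the "Name" form field are illustrative assumptions), a custom model binder takes over the mapping from posted form data:

using System.Web.Mvc;

public class ClassAModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        var form = controllerContext.HttpContext.Request.Form;

        // The construction rules live here instead of in the controller;
        // "Name" stands in for whatever fields your form actually posts.
        return new ClassA(form["Name"]);
    }
}

// Registered once at startup, e.g. in Application_Start:
// ModelBinders.Binders.Add(typeof(ClassA), new ClassAModelBinder());

With that in place, the controller action can accept ClassA directly and stay free of mapping code.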
Check also this excellent post by Jimmy Bogard (co-author of ASP.NET MVC In Action) about ViewModels. This might help you automate the mapping. It also suggests a reversed technique - make your controllers work with entities, not ViewModels! Custom action filters and model binders are really the key to eliminating routine that doesn't really belong in controllers but is rather an infrastructure detail between the view and the controller. For example, here's how I automate entity retrieval. Here's how I see what controllers should do.
The goal here is to make controllers concentrate on managing business logic, putting aside all the technical details that do not belong to your business. It's technical constraints that you talk about in this question, and you let them leak into your code. But you can use MVC tools to move them to the infrastructure level.
UPDATE: No, repositories shouldn't handle form data; that's what I mean by "coupling with the presentation layer". Yes, the repositories are used in the controller, but they don't work with form data. You can (not that you should) make the form work with "repository data" - i.e. entities - and that's what most examples do, e.g. NerdDinner - but not the other way around. This follows the general rule of thumb: higher layers can be coupled to lower ones (presentation coupled to repositories and entities), but a lower level should never be coupled to a higher one (entities depending on repositories, repositories depending on the form model, etc.).
The first step should be done in the repository, that's right - except that mapping from ClassX to EntityX does not belong to that step. It's a mapping concern - an infrastructure detail. See for example this question about mapping, but generally if you have two layers (UI and repositories) they shouldn't care about mapping - a mapper service/helper should. Besides Jimmy's blog, you can also read ASP.NET MVC In Action or simply look at their CodeCampServer to see how they do mapping with IEntityMapper interfaces passed to controller constructors (note that this is a more manual approach than Jimmy Bogard's AutoMapper).
One more thing. Read about Domain Driven Design, look for articles, learn from them, but you don't have to follow everything. These are guidelines, not strict solutions. See if your project can handle that, see if you can handle that, and so on. Try to apply these techniques since they're generally excellent and proven ways of doing development, but don't take them blindly - it's better to learn along the way than to apply something you don't understand.
I would say using DTOs and wrapping the Entity Framework with your own data access methods and business layer is a great way to go. You may end up writing a lot of code, but it's a better architecture than pretending the Entity Framework generated code is your business layer.
These issues aren't really tied to ASP.NET MVC in any way. ASP.NET MVC gives basically no guidance on how to do your model / data access, and most of the samples and tutorials for ASP.NET MVC are not production-worthy model implementations, but really just minimal samples.
It seems like you are on the right track, keep going.
In the end, you are using Entity Framework mostly as a code generator that isn't generating very useful code and so you may want to look into other code generators or tools or frameworks that more closely match your requirements.
