Async Controllers in ASP.NET MVC - asp.net-mvc

I have a "blog" website developed using ASP.NET MVC 1. Recent version of MVC includes AsyncController feature. This actually requires some additional task in development. But how can I reuse my existing code without modifying my business layer.
Some part of the code looks like:
BlogPost post = new BlogPost();
post.GetPost(58345);
BlogComment comments = new BlogComment();
comments.GetComments(58345);
As it stands, the request has to wait until both operations complete. Using AsyncController, I can run the two operations simultaneously. But the classes BlogPost and BlogComment would need to be changed to support asynchronous operation, for example by adding event handlers to signal when an operation has completed.
How can I do the operations asynchronously without modifying the existing business layer?

You could do this:
public class BlogController : AsyncController
{
    private readonly IBlogRepository _repository;

    public BlogController(IBlogRepository repository)
    {
        _repository = repository;
    }

    public void ShowAsync(int id)
    {
        AsyncManager.OutstandingOperations.Increment(2);

        new Thread(() =>
        {
            AsyncManager.Parameters["post"] = _repository.GetPost(id);
            AsyncManager.OutstandingOperations.Decrement();
        }).Start();

        new Thread(() =>
        {
            AsyncManager.Parameters["comments"] = _repository.GetComments(id);
            AsyncManager.OutstandingOperations.Decrement();
        }).Start();
    }

    public ActionResult ShowCompleted(Post post, IEnumerable<Comment> comments)
    {
        return View(new BlogViewModel
        {
            Post = post,
            Comments = comments,
        });
    }
}
You should measure the performance of your application and decide whether introducing an async controller actually brings any value.

First, why do you think you need to do Async Controllers? Are you experiencing some performance problem that you think Async will help you with? Why complicate your application with Async handling if you do not really need it?
Async is really designed to handle much more massive scaling, or cases where you need to do non-CPU-bound operations in your controllers that might take a long time to execute.
Second, I think you are a bit confused about how Async controllers operate. You don't need to modify your business layer in most cases; you simply need to create an async "shim" to wrap your business layer. Async does not mean "multi-threaded". It will still be one thread per request, and you will still call your business logic single-threaded (unless you write code to do things multi-threaded).
All Async controllers do is allow for better utilization of the Thread Pool. When you have threads that are not CPU bound, they can be returned to the thread pool while waiting for your request to be re-activated, thus allowing the thread pool to be better utilized, rather than using a thread that does nothing but wait.
If you need to call multiple operations, you use the AsyncManager.OutstandingOperations property to control how many operations must complete before the request completes.
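For illustration, here is a minimal sketch of such a shim over the question's unchanged BlogPost/BlogComment classes, assuming GetPost/GetComments return the loaded data (adjust if they populate the object instead); it queues the synchronous calls on the thread pool rather than spinning up new threads:
using System.Threading;
using System.Web.Mvc;
using System.Web.Mvc.Async;

public class BlogController : AsyncController
{
    public void ShowAsync(int id)
    {
        AsyncManager.OutstandingOperations.Increment(2);

        ThreadPool.QueueUserWorkItem(_ =>
        {
            var post = new BlogPost();                        // existing, unmodified class
            AsyncManager.Parameters["post"] = post.GetPost(id);
            AsyncManager.OutstandingOperations.Decrement();
        });

        ThreadPool.QueueUserWorkItem(_ =>
        {
            var comments = new BlogComment();                 // existing, unmodified class
            AsyncManager.Parameters["comments"] = comments.GetComments(id);
            AsyncManager.OutstandingOperations.Decrement();
        });
    }

    public ActionResult ShowCompleted(object post, object comments)
    {
        // Both results are bound from AsyncManager.Parameters by name.
        return View();
    }
}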

Related

Are there benefits of using asynchronous action methods in Asp Net MVC application that consumes Asp Net Web Api with synchronous action methods?

I am developing an Asp Net MVC client that will use Asp Net Web Api. And I have to decide on how to better design my application. After searching the web for a while I found people suggesting (for example, here) to make MVC action methods asynchronous. I also found out that the main benefit of having asynchronous action methods is scalability of the server, i.e., the server will be able to serve more requests. There is one thing though, the web api I'm consuming has synchronous action methods, and will run on the same server. So my guess is there are no benefits for me to implement asynchronous action methods for my MVC, because even if my MVC action methods are asynchronous and the server will be able to scale from the "MVC point of view" in the end these methods will still consume synchronous Web Api methods, and because of this, the server will "inevitably" run out of its thread pool. Maybe I'm missing something or there are some other benefits of asynchronous action methods?
Here is a very simple sample code I wrote to make you better understand my issue:
This is the web api controller:
public class UsersController : ApiController
{
    private readonly IUserService _userService;

    public UsersController(IUserService userService)
    {
        _userService = userService;
    }

    // As you can see this method is not asynchronous
    public User Get(int id)
    {
        return _userService.GetUserById(id);
    }

    // some other code
}
This is the MVC controller. I have two choices for how to design my action methods:
public class UsersController : Controller
{
    // A) Make simple synchronous action methods
    public ActionResult UserPageSync()
    {
        IUserWebServiceSync userWebServiceSync = new UserWebServiceSync();
        User user = userWebServiceSync.GetUserById(1);
        return View();
    }

    // B) Make asynchronous action methods
    public async Task<ActionResult> UserPageAsync()
    {
        IUserWebServiceAsync userWebServiceAsync = new UserWebServiceAsync();
        User user = await userWebServiceAsync.GetUserByIdAsync(1);
        return View();
    }
}
the web api I'm consuming has synchronous action methods, and will run on the same server
This is a highly unusual design. But I'll ignore that.
You can think about it this way: if the MVC actions are synchronous, then they will take 2 threads per request (one for the MVC action, one for the WebAPI action). If the MVC actions are asynchronous, then they will take 1 thread per request (none for the MVC action, one for the WebAPI action). So there's still a clear benefit to doing async on your MVC actions.
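To make the second option concrete, here is a rough sketch of what the async wrapper assumed above could look like using HttpClient. UserWebServiceAsync, the base address and the route are all assumptions, and ReadAsAsync<T> requires the Web API client libraries:
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class UserWebServiceAsync : IUserWebServiceAsync
{
    private static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost/api/")   // assumed address
    };

    public async Task<User> GetUserByIdAsync(int id)
    {
        // While this await is pending, the MVC request thread goes back to the
        // pool; only the Web API side is holding a thread.
        HttpResponseMessage response = await client.GetAsync("users/" + id);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<User>();
    }
}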

Does lots of controllers slows the performance? MVC

I want to ask a simple question about MVC controllers. I have googled a lot about controllers, specifically "different controllers for each basic table"; it cleared up a lot of things, but I have one question that I couldn't find an answer for.
My question is: if I create a controller for each basic table, and let's say I have 10 basic tables, that would create 10 controllers. So do lots of controllers slow the application's performance?
- In the case of going from a view to a controller.
- In the case of going from one controller to another controller.
I am new, so kindly be calm :)
Usually, one request is processed by one controller. And if it (the controller) is small and has only a few dependencies, it's quick. When you have one huge controller with many dependencies on other classes that have their own dependencies and so on... it could be a problem.
The Short Answer
No.
The Long Answer
The number of controllers doesn't have as much of a performance impact as how expensive each controller instance is to create.
The amount of overhead you might get for the number of controllers is negligible. Although the MVC framework uses .NET Reflection to identify the current controller type, it is optimized to look in the <Project Name>.Controllers namespace first. But this list is cached in a file, so after the first hit the performance is pretty good.
Where you might run into performance problems is when you do heavy processing within the controller constructor. The framework creates a controller instance for every request, so you should make it as cheap as possible to create a controller instance. If you follow a DI-centric (dependency injection) approach even if you are not actually using DI in your project, you will be able to keep the cost of creating a controller instance to a bare minimum.
What this means in plain English is: when the controller is created, only inject your dependencies into the constructor. Don't actually do any processing in the constructor; defer that to the actual Action method call.
public interface IHeavyProcessingService
{
    IProcessingResult DoSomethingExpensive();
}

public class HeavyProcessingService : IHeavyProcessingService
{
    public HeavyProcessingService()
    {
    }

    public IProcessingResult DoSomethingExpensive()
    {
        // Lots of heavy processing
        System.Threading.Thread.Sleep(300);
        return null; // placeholder - the real implementation would return an IProcessingResult
    }
}

public class HomeController : Controller
{
    private readonly IHeavyProcessingService heavyProcessingService;

    // The constructor does no heavy processing; that work is deferred until an
    // action method actually calls the service.
    // The only thing happening here is assignment of dependencies.
    public HomeController(IHeavyProcessingService heavyProcessingService)
    {
        this.heavyProcessingService = heavyProcessingService
            ?? throw new ArgumentNullException(nameof(heavyProcessingService));
    }

    public ActionResult Index()
    {
        var result = this.heavyProcessingService.DoSomethingExpensive();
        // Do something with the result of the heavy processing
        return View();
    }

    public ActionResult About()
    {
        return View();
    }

    public ActionResult Contact()
    {
        return View();
    }
}
See this answer for more information.
If you do actually use a DI container in your application, you can improve performance even more by choosing the correct lifestyle of each dependency. If you can share the same dependency instance across multiple controller instances (singleton lifestyle), it makes the controller instance even cheaper to create.
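As a rough illustration, with Ninject (the container linked elsewhere on this page) a singleton registration might look like the following, assuming HeavyProcessingService is safe to share between requests:
using Ninject.Modules;

public class ServiceModule : NinjectModule
{
    public override void Load()
    {
        // One shared instance for the whole application; resolving it per
        // controller instance is then practically free.
        Bind<IHeavyProcessingService>().To<HeavyProcessingService>().InSingletonScope();
    }
}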
What Backs said isn't necessarily true, either. The number of dependencies doesn't matter so much as how expensive those dependencies are to create. As long as the constructors are kept light and simple and the correct lifestyle is used for each dependency, performance won't be an issue regardless of the number of dependencies a controller has. That said, the controller shouldn't have more than about 5 direct dependencies - after that, you should refactor to aggregate services, making the dependency hierarchy more like an upside down pyramid rather than a flat set that are all injected into the controller.
It depends on the number of calls to the controller. If you make frequent calls to a controller for 2 or 3 tables, it may get slow. Instead, group those 3 tables into one controller and call that. If your application only needs to work with an individual table, then that's fine and you will get a quicker response. But if your application needs content from 2 or 3 tables, then you would have to call 3 controllers, so the better way is to group them into one controller.
Hope you got the point

better organise database operation codes in mvc

I am new to .NET MVC, and here is my situation: in my MVC solution I have a data model and a repository, and I also have an IoC container. When it comes to data operations, should I put all my logic code in the controller, or is there a better way?
public ActionResult SomeOperate(Person person)
{
    var reposity = _kernel.Get<IReposity<Person>>();
    // What if there is a lot of database operation logic based on my generic
    // repository - should I put it all here?
    return RedirectToAction("SomeWhere");
}
EDIT1
My generic repository already supports basic database operations such as add, update, remove, query and transactions.
By default, the controller can contain business logic (and that's okay). But as your application grows in size, you start doubting whether the controller should be responsible for containing the business logic.
In a more advanced architecture, the Controller only acts as a "Coach" and lets the players do the job. In other words, the controller only worries about who should do what. Hence the name controller.
The Service Layer
The Service Layer is just a collection of classes created for one purpose, to encapsulate your business layer, moving the logic away from the controller.
See my example below for a basic implementation of a service.
Service
public class ProductService
{
    private IProductRepository productRepository;

    public ProductService(IProductRepository productRepository)
    {
        this.productRepository = productRepository;
    }

    public IEnumerable<Product> ListProducts()
    {
        return this.productRepository.ListProducts();
    }

    public void CreateProduct(Product productToCreate)
    {
        // Do validations here.
        // Do database operation.
        this.productRepository.Create(productToCreate);
    }
}
Controller
// Controller
public class ProductController : Controller
{
    private ProductService productService;

    // If you are wondering how to instantiate this
    // controller, see the ninject tutorial
    // http://www.codeproject.com/Articles/412383/Dependency-Injection-in-asp-net-mvc4-and-webapi-us
    public ProductController(ProductService productService)
    {
        this.productService = productService;
    }

    public ActionResult Index()
    {
        IEnumerable<Product> products = this.productService.ListProducts();
        return View(products);
    }

    public ActionResult Create()
    {
        return View();
    }

    [HttpPost]
    public ActionResult Create(Product productToCreate)
    {
        if (!ModelState.IsValid)
        {
            return View();
        }
        this.productService.CreateProduct(productToCreate);
        return RedirectToAction("Index");
    }
}
The full tutorial straight from Microsoft: http://www.asp.net/mvc/tutorials/older-versions/models-(data)/validating-with-a-service-layer-cs
UPDATE
Why use a service layer
https://softwareengineering.stackexchange.com/questions/162399/how-essential-is-it-to-make-a-service-layer
Service per Model/Entity
With regard to the number of services per model, there is no absolute rule. Most of the time it scales one-to-one, and sometimes one-to-many (referred to as service per module).
The number of routines in a single service class depends on the number of operations in the UI, meaning if there is no delete button anywhere in the system then there shouldn't be a delete method anywhere in your code. In other words, CRUD should only apply when needed.
Service per Module
Sometimes a service can scale to multiple models, given there is an operation that requires you to update multiple models. This is sometimes referred to as "service per module": the service does not represent a model but rather an operation.
RetireService
- RetireEmployee(User user)
MailingService
- SendWeeklyMails()
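A sketch of what such a module-style service could look like; the repositories, their methods and the User members used here are purely illustrative:
public class RetireService
{
    private readonly IEmployeeRepository employees;   // hypothetical
    private readonly IPayrollRepository payroll;      // hypothetical

    public RetireService(IEmployeeRepository employees, IPayrollRepository payroll)
    {
        this.employees = employees;
        this.payroll = payroll;
    }

    public void RetireEmployee(User user)
    {
        // One operation that touches more than one model.
        employees.MarkAsRetired(user.Id);
        payroll.StopPayments(user.Id);
    }
}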
Services and Interfaces
Most of the time, interfaces are not required for a service layer. They are usually only needed for the following reasons:
Large team (5 or more)
Large system
Heavy test driven development.
This link extends much on this subject:
https://softwareengineering.stackexchange.com/questions/159813/do-i-need-to-use-an-interface-when-only-one-class-will-ever-implement-it
The Single-Responsibility Principle would indicate that you should identify one responsibility for each class, and avoid putting logic into the class that doesn't pertain to that responsibility.
If this is just an exercise in learning technologies, or a small project, then you're probably safe putting most of your Entity manipulations and business logic in the individual controller actions. If this project is likely to grow, and need to be maintained by various people over time, you're probably better off defining an N-Tier architecture right off the bat.
How you do this will depend somewhat on personal preference and the nature of your project, but a popular approach is to create a "Service" or "Manager" layer where all of your business logic resides. Then the various controller actions invoke the actions on that layer, transform them into ViewModel objects, and pass them off to the views. In this architecture, Controllers end up being very lightweight, and are focused mostly on transforming requests into service calls and composing the data that the Views will need to render.
Many people feel that the ORM (e.g. Entity Framework) represents the "data access layer," and they don't see a need to create an additional layer beyond the service layer. Other people create individualized classes to hold the queries and commands to Entity Framework, and the Service layer leverages these various other classes.
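For example, in that style a controller action stays thin and only maps the service result to a view model (PersonService, Person and PersonViewModel are illustrative names, not from the question):
public class PersonController : Controller
{
    private readonly IPersonService personService;

    public PersonController(IPersonService personService)
    {
        this.personService = personService;
    }

    public ActionResult Details(int id)
    {
        // Business logic lives in the service layer.
        Person person = personService.GetById(id);

        // The controller only shapes data for the view.
        var viewModel = new PersonViewModel
        {
            FullName = person.FirstName + " " + person.LastName
        };

        return View(viewModel);
    }
}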

ICommandHandler/IQueryHandler with async/await

EDITH says (tl;dr)
I went with a variant of the suggested solution; keeping all ICommandHandlers and IQueryHandlers potentially asynchronous and returning a resolved task in synchronous cases. Still, I don't want to use Task.FromResult(...) all over the place so I defined an extension method for convenience:
public static class TaskExtensions
{
    public static Task<TResult> AsTaskResult<TResult>(this TResult result)
    {
        // Or TaskEx.FromResult if you're targeting .NET4.0
        // with the Microsoft.BCL.Async package
        return Task.FromResult(result);
    }
}
// Usage in code (the extension method is picked up by importing the
// namespace that contains TaskExtensions) ...
class MySynchronousQueryHandler : IQueryHandler<MyQuery, bool>
{
    public Task<bool> Handle(MyQuery query)
    {
        return true.AsTaskResult();
    }
}

class MyAsynchronousQueryHandler : IQueryHandler<MyQuery, bool>
{
    public async Task<bool> Handle(MyQuery query)
    {
        return await this.callAWebserviceToReturnTheResult();
    }
}
It's a pity that C# isn't Haskell ... yet 8-). Really smells like an application of Arrows. Anyway, hope this helps anyone. Now to my original question :-)
Introduction
Hello there!
For a project I'm currently designing an application architecture in C# (.NET4.5, C#5.0, ASP.NET MVC4). With this question I hope to get some opinions about some issues I stumbled upon trying to incorporate async/await. Note: this is quite a lengthy one :-)
My solution structure looks like this:
MyCompany.Contract (Commands/Queries and common interfaces)
MyCompany.MyProject (Contains the business logic and command/query handlers)
MyCompany.MyProject.Web (The MVC web frontend)
I read up on maintainable architecture and Command-Query-Separation and found these posts very helpful:
Meanwhile on the query side of my architecture
Meanwhile on the command side of my architecture
Writing highly maintainable WCF services
So far I've got my head around the ICommandHandler/IQueryHandler concepts and dependency injection (I'm using SimpleInjector - it's really dead simple).
The Given Approach
The approach of the articles above suggests using POCOs as commands/queries and describes dispatchers of these as implementations of the following handler interfaces:
interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
In an MVC Controller you'd use this as follows:
class AuthenticateCommand
{
    // The token to use for authentication
    public string Token { get; set; }

    public string SomeResultingSessionId { get; set; }
}

class AuthenticateController : Controller
{
    private readonly ICommandHandler<AuthenticateCommand> authenticateUser;

    public AuthenticateController(ICommandHandler<AuthenticateCommand> authenticateUser)
    {
        // Injected via DI container
        this.authenticateUser = authenticateUser;
    }

    public ActionResult Index(string externalToken)
    {
        var command = new AuthenticateCommand
        {
            Token = externalToken
        };
        this.authenticateUser.Handle(command);
        var sessionId = command.SomeResultingSessionId;
        // Do some fancy thing with our new found knowledge
    }
}
Some of my observations concerning this approach:
1. In pure CQS only queries should return values, while commands should be, well, only commands. In reality it is more convenient for commands to return values instead of issuing the command and later doing a query for the thing the command should have returned in the first place (e.g. database ids or the like). That's why the author suggested putting a return value into the command POCO.
2. It is not very obvious what is returned from a command; in fact it looks like the command is a fire-and-forget type of thing until you eventually encounter the weird result property being accessed after the handler has run - plus the command now knows about its result.
3. The handlers have to be synchronous for this to work - queries as well as commands. As it turns out, with C# 5.0 you can inject async/await powered handlers with the help of your favorite DI container, but the compiler doesn't know about that at compile time, so the MVC handler will fail miserably with an exception telling you that the method returned before all asynchronous tasks finished executing.
Of course you can mark the MVC handler as async and this is what this question is about.
Commands Returning Values
I thought about the given approach and made changes to the interfaces to address issues 1. and 2. in that I added an ICommandHandler that has an explicit result type - just like the IQueryHandler. This still violates CQS but at least it is plain obvious that these commands return some sort of value with the additional benefit of not having to clutter the command object with a result property:
interface ICommandHandler<TCommand, TResult>
{
    TResult Handle(TCommand command);
}
Naturally one could argue that when you have the same interface for commands and queries why bother? But I think it's worth naming them differently - just looks cleaner to my eyes.
My Preliminary Solution
Then I thought hard about the 3rd issue at hand ... some of my command/query handlers need to be asynchronous (e.g. issuing a WebRequest to another web service for authentication), others don't. So I figured it would be best to design my handlers from the ground up for async/await - which of course bubbles up to the MVC handlers, even for handlers that are in fact synchronous:
interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> Handle(TQuery query);
}

interface ICommandHandler<TCommand>
{
    Task Handle(TCommand command);
}

interface ICommandHandler<TCommand, TResult>
{
    Task<TResult> Handle(TCommand command);
}

class AuthenticateCommand
{
    // The token to use for authentication
    public string Token { get; set; }

    // No more return properties ...
}
AuthenticateController:
class AuthenticateController : Controller
{
    private readonly ICommandHandler<AuthenticateCommand, string> authenticateUser;

    public AuthenticateController(ICommandHandler<AuthenticateCommand, string> authenticateUser)
    {
        // Injected via DI container
        this.authenticateUser = authenticateUser;
    }

    public async Task<ActionResult> Index(string externalToken)
    {
        var command = new AuthenticateCommand
        {
            Token = externalToken
        };
        // It's pretty obvious that the command handler returns something
        var sessionId = await this.authenticateUser.Handle(command);
        // Do some fancy thing with our new found knowledge
    }
}
Although this solves my problems - obvious return values, all handlers can be async - it hurts my brain to put async on a thingy that isn't async just because. There are several drawbacks I see with this:
- The handler interfaces are not as neat as I wanted them to be - the Task<...> thingys to my eyes are very verbose and at first sight obfuscate the fact that I only want to return something from a query/command.
- The compiler warns you about not having an appropriate await within synchronous handler implementations (I want to be able to compile my Release with Warnings as Errors) - you can suppress this with a pragma ... yeah ... well ...
- I could omit the async keyword in these cases to make the compiler happy, but in order to implement the handler interface you would have to return some sort of Task explicitly - that's pretty ugly.
- I could supply synchronous and asynchronous versions of the handler interfaces (or put all of them in one interface, bloating the implementation), but my understanding is that, ideally, the consumer of a handler shouldn't be aware of the fact that a command/query handler is sync or async, as this is a cross-cutting concern. What if I need to make a formerly synchronous command async? I'd have to change every consumer of the handler, potentially breaking semantics on my way through the code.
- On the other hand, the potentially-async-handlers approach would even give me the ability to change sync handlers to be async by decorating them with the help of my DI container (a sketch of such a decorator follows below).
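For what it's worth, such a decorator could look roughly like this against the Task-returning ICommandHandler<TCommand> above (whether offloading to the thread pool is actually wise in ASP.NET is a separate question):
using System.Threading.Tasks;

public class BackgroundThreadCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;

    public BackgroundThreadCommandHandlerDecorator(ICommandHandler<TCommand> decorated)
    {
        this.decorated = decorated;
    }

    public Task Handle(TCommand command)
    {
        // Turns a (possibly synchronous) handler into an asynchronous one.
        return Task.Run(() => decorated.Handle(command));
    }
}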
Right now I don't see a best solution to this ... I'm at a loss.
Anyone having a similar problem and an elegant solution I didn't think of?
Async and await don't mix perfectly with traditional OOP. I have a blog series on the subject; you may find the post on async interfaces helpful in particular (though I don't cover anything you haven't already discovered).
The design problems around async are extremely similar to the ones around IDisposable; it's a breaking change to add IDisposable to an interface, so you need to know whether any possible implementation may ever be disposable (an implementation detail). A parallel problem exists with async; you need to know whether any possible implementation may ever be asynchronous (an implementation detail).
For these reasons, I view Task-returning methods on an interface as "possibly asynchronous" methods, just like an interface inheriting from IDisposable means it "possibly owns resources."
The best approach I know of is:
- Define any methods that are possibly-asynchronous with an asynchronous signature (returning Task/Task<T>).
- Return Task.FromResult(...) for synchronous implementations. This is more proper than an async method without an await.
This approach is almost exactly what you're already doing. A more ideal solution may exist for a purely functional language, but I don't see one for C#.
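Applied to the command side of the question, a synchronous implementation of the Task-returning interface would then be a sketch like this (DeactivateUserCommand is just an illustrative name):
using System.Threading.Tasks;

class DeactivateUserCommand { /* illustrative command with no payload */ }

class DeactivateUserCommandHandler : ICommandHandler<DeactivateUserCommand>
{
    public Task Handle(DeactivateUserCommand command)
    {
        // ... purely synchronous work ...

        // Hand back an already-completed task instead of marking the method async.
        return Task.FromResult(0);   // Task.CompletedTask on .NET 4.6 and later
    }
}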
You state:
the consumer of a handler shouldn't be aware of the fact that a command/query handler is sync or async as this is a cross cutting concern
Stephen Cleary already touched on this a bit, but async is not a cross-cutting concern (or at least not the way it's implemented in .NET). Async is an architectural concern, since you have to decide up front whether to use it or not, and it completely changes all your application code. It changes your interfaces, and it's therefore impossible to 'sneak' this in without the application knowing about it.
Although .NET made async easier, as you said, it still hurts your eyes and mind. Perhaps it just needs mental training, but I'm really wondering whether it is all worth the trouble to go async for most applications.
Either way, prevent having two interfaces for command handlers. You must pick one, because having two separate interfaces will force you to duplicate all the decorators that you want to apply to them and duplicates your DI configuration. So either have an interface that returns Task and uses output properties, or go with Task<TResult> and return some sort of Void type in case there is no return type.
As you can imagine (the articles you point at are mine), my personal preference is to have a void Handle or Task Handle method, since with commands the focus is not on the return value; when you do have a return value, you end up with a duplicate interface structure mirroring the one the queries have:
public interface ICommand<TResult> { }

public interface ICommandHandler<TCommand, TResult>
    where TCommand : ICommand<TResult>
{
    Task<TResult> Handle(TCommand command);
}
Without the ICommand<TResult> interface and the generic type constraint, you will be missing compile time support. This is something I explained in Meanwhile... on the query side of my architecture
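To sketch what that compile-time support buys you (re-declaring the question's AuthenticateCommand here purely for illustration):
using System.Threading.Tasks;

// The marker interface ties the command to its result type.
class AuthenticateCommand : ICommand<string>   // the result is the session id
{
    public string Token { get; set; }
}

// A handler for AuthenticateCommand that doesn't produce a Task<string>
// simply won't satisfy the generic constraint and won't compile.
class AuthenticateCommandHandler : ICommandHandler<AuthenticateCommand, string>
{
    public Task<string> Handle(AuthenticateCommand command)
    {
        return Task.FromResult("session-id-stub");   // sketch only
    }
}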
I created a project for just this - I wound up not splitting commands and queries, instead using request/response and pub/sub - https://github.com/jbogard/MediatR
public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);
    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);
    void Publish<TNotification>(TNotification notification) where TNotification : INotification;
    Task PublishAsync<TNotification>(TNotification notification) where TNotification : IAsyncNotification;
}
For the case where commands don't return results, I used a base class that returned a Void type (Unit for functional folks). This allowed me to have a uniform interface for sending messages that have responses, with a null response being an explicit return value.
As someone exposing a command, you explicitly opt-in to being asynchronous in your definition of the request, rather than forcing everyone to be async.
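A rough usage sketch against the IMediator interface shown above (GetUserQuery and User are illustrative types, not something MediatR ships):
class GetUserQuery : IAsyncRequest<User>
{
    public int Id { get; set; }
}

public class UserController : Controller
{
    private readonly IMediator mediator;

    public UserController(IMediator mediator)
    {
        this.mediator = mediator;
    }

    public async Task<ActionResult> Details(int id)
    {
        // Opting in to async here because GetUserQuery was defined as an IAsyncRequest.
        User user = await mediator.SendAsync(new GetUserQuery { Id = id });
        return View(user);
    }
}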
Not really an answer, but for what it's worth, I came to the exact same conclusions and a very similar implementation.
My ICommandHandler<T> and IQueryHandler<T> return Task and Task<T> respectively. In the case of a synchronous implementation I use Task.FromResult(...). I also had some *handler decorators in place (like for logging) and, as you can imagine, these also needed to be changed.
For now, I decided to make 'everything' potentially await-able, and got into the habit of using await in conjunction with my dispatcher (which finds the handler in the Ninject kernel and calls Handle on it).
I went async all the way, also in my webapi/mvc controllers, with few exceptions. In those rare cases I use ContinueWith(...) and Wait() to wrap things in a synchronous method.
Another, related frustration I have is that Microsoft recommends naming methods with the *Async suffix when they are (duh) async. But as this is an implementation decision, I have (for now) decided to stick with Handle(...) rather than HandleAsync(...).
It is definitely not a satisfactory outcome and I'm also looking for a better solution.

WCF Client Instantiation

I have an MVC controller class that uses a WCF service (WSHttpBinding), sometimes making multiple calls within one HTTP request, and I want to know how expensive it is to create a client for that service. Is it OK to create an instance of the client for every call, or should I create a member variable in the class?
public class RingbacksController : Controller
{
    private void LoadContactsIntoViewData(int page)
    {
        RingbackServiceClient client = new RingbackServiceClient();
        ...
        client.Close();
    }

    private void LoadGroupsIntoViewData(int page)
    {
        RingbackServiceClient client = new RingbackServiceClient();
        ...
        client.Close();
    }
}
or
public class RingbacksController : Controller
{
    private RingbackServiceClient client = new RingbackServiceClient();

    private void LoadContactsIntoViewData(int page)
    {
        ...
        client.Close();
    }

    private void LoadGroupsIntoViewData(int page)
    {
        ...
        client.Close();
    }
}
Creating the client is usually not an awfully expensive operation - so you should be fine instantiating it whenever you need it (as Steven mentioned, too - if it's faulted due to an error, you'll need to do that anyway).
If you are using a ChannelFactory to create the channel (that's one of the ways to do it), the ChannelFactory itself, on the other hand, is a pretty heavyweight and time-intensive thing to create, so it would be a good idea to hang on to a ChannelFactory instance for as long as you can.
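A minimal sketch of that idea; the contract name and endpoint configuration name are assumptions based on the question:
using System.ServiceModel;

public static class RingbackChannelFactory
{
    // Heavyweight: build once and keep it around.
    private static readonly ChannelFactory<IRingbackService> factory =
        new ChannelFactory<IRingbackService>("WSHttpBinding_IRingbackService");

    public static IRingbackService Create()
    {
        // Channels are cheap: create one per call, then Close()/Abort() it.
        return factory.CreateChannel();
    }
}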
Marc
In the past, I've created a new instance of the ChannelFactory<> and client/proxy for every call to the WCF service. I haven't had any problems with it, especially not for performance. The application I wrote was deployed on an internal company network (local LAN) where about 30 Windows Forms clients would connect to my WCF service.
Have a look at the following question Where to trap failed connection on WCF calling class? and my answer to it. Its basically a wrapper class which handles client/proxy instantiation and does a lot of necessary error handling to overcome certain shortcomings in the design of WCF (more info in the linked question).
You could re-write it or wrap it further in another factory, so that you can cache the ChannelFactory and client/proxy if you are worried about performance. I have "heard" that it's a bad idea to cache the ChannelFactory or client/proxy - however, I am open to correction here.
Should you decide to go with a member, please keep in mind that once it gets faulted, all calls afterwards will fail. As for whether it's worth it, I suggest benchmarking.