I am just starting to port an application to ASP.NET MVC, and I have an object holding application state (it keeps track of certain processes running on the machine, starting and stopping them as necessary and sending/receiving MSMQ messages).
Where should I keep this object? In my current application (based on HttpListener) it is a singleton, but I know singletons make testing difficult. It would be difficult to mock or test this object, at least in the context of the MVC application itself, and it has its own set of tests outside the application anyway. However, it may need to be replaced by a stub for testing.
The object needs to be made available to a number of controllers. Where should I store it, and how should I make it available to the controllers? I've never seen a case like this described in any ASP.NET MVC examples.
UPDATE:
I guess I need to explain why I can't store this data in a database. First I must explain what the application does:
The application serves images that are generated dynamically by a number of "engines", which are processes running on the server and communicated with via MSMQ. Let's call the object I'm asking about the EngineManager. The process goes something like this:
1. The client POSTs an XML request to the server, giving the name of the "engine" to be used, as well as a number of parameters describing the image.
2. The application checks the EngineManager to see if that engine is running. If not, it starts it.
3. The application posts an MSMQ message to the engine and waits for the response.
4. The application sends the generated image back to the client.
If at any point the engine shuts down or crashes, the application must be aware of that so that it can be restarted on the next request to that engine.
When the application shuts down, all engines are also shut down.
There are several controllers that handle these requests, each doing a slightly different job. All of them need to communicate with the same EngineManager, which also needs, in certain situations, to synchronise access to other resources.
As you can see, it's not your typical database-backed webserver.
You should pass the object to the constructor of each controller instance, and the controller's action methods should all use that injected instance.
The default ControllerFactory that ships with ASP.NET MVC will not allow you to do this. However, there are free add-on frameworks (the one I like is Autofac) which do permit this style of programming.
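As a rough sketch of how that wiring might look with Autofac's ASP.NET MVC integration (IEngineManager/EngineManager are hypothetical names for your state object; this assumes the Autofac and Autofac.Integration.Mvc packages):

// Global.asax.cs: minimal Autofac setup sketch.
using System.Reflection;
using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        var builder = new ContainerBuilder();

        // One EngineManager for the whole application lifetime.
        builder.RegisterType<EngineManager>()
               .As<IEngineManager>()
               .SingleInstance();

        // Let Autofac construct controllers and inject their dependencies.
        builder.RegisterControllers(Assembly.GetExecutingAssembly());

        DependencyResolver.SetResolver(
            new AutofacDependencyResolver(builder.Build()));
    }
}

// A controller then simply declares what it needs:
public class ImageController : Controller
{
    private readonly IEngineManager _engines;

    public ImageController(IEngineManager engines)
    {
        _engines = engines; // the same singleton instance on every request
    }
}

Because the registration is SingleInstance, every controller shares the one EngineManager, while tests can pass a stub to the constructor directly.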
If you want this object to be available to all users, i.e. it is not session specific, you could look at storing it in application state:
http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
However, application state has several disadvantages, as listed on the page linked above, so make sure these issues don't affect you before you go down that route. In general I steer clear of application state and store application data in a back-end DB, but since that isn't an option for you, application state may be OK.
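For illustration, a minimal sketch of stashing and reading such an object via application state (the "EngineManager" key is just an assumed name):

// In Global.asax.cs, at application startup:
protected void Application_Start()
{
    // Application state is shared across all users and requests.
    // Lock/UnLock guard against concurrent writes.
    Application.Lock();
    Application["EngineManager"] = new EngineManager();
    Application.UnLock();
}

// In a controller action:
public ActionResult Render()
{
    var engines = (EngineManager)HttpContext.Application["EngineManager"];
    // ... use engines to locate or start the right engine ...
    return new EmptyResult();
}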
Related
I am writing an MVC UI wrapper, reusing legacy core libraries that were written for the desktop edition and use Autofac for DI. The problem I am facing is that the core libraries work with lifetime scopes that I can't change, while MVC requires InstancePerRequest.
So what happens is that in MVC, if I register my services with InstancePerRequest scope, they get disposed by the core libraries before the request completes, which makes the MVC application unhappy.
I tried using lifetime scopes for all services in the MVC app too. Since a lifetime scope is shorter than the request lifetime, it appears to work in MVC.
Is there any downside in this approach?
Note: in the legacy code, services are always resolved manually rather than injected through constructors, like this:
using (var scope = IocContainer.BeginLifetimeScope())
{
    var service = scope.Resolve<IMyService>();
    return service.FindAll();
}
MVC will work with InstancePerLifetimeScope for services, as noted in the documentation about sharing registrations across apps that have request scopes and apps that don't.
I think there are going to be potentially two gotchas in your approach to creating your own lifetime scope. Whether you can live with them is very much app specific so you'll have to judge for yourself.
Problem 1: Early Disposal
In your example you show a factory or service IMyService being resolved, doing some work, and returning that work. At the end of the using statement the owning lifetime scope is getting disposed. That means IMyService will be disposed (if it's IDisposable) and any dependencies that IMyService requires will also be disposed. In the case of things like database contexts or connections, that well could mean the return value becomes invalid because you won't be able to update the values or read additional data against a disposed connection.
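To make the early-disposal problem concrete, here is a hedged sketch (IMyService, Customer, and the lazily-evaluated FindAll are illustrative assumptions) of how a value can become invalid once the owning scope closes:

IQueryable<Customer> results;
using (var scope = IocContainer.BeginLifetimeScope())
{
    var service = scope.Resolve<IMyService>();
    // Suppose FindAll returns a lazily-evaluated query backed by a
    // DbContext that lives inside this scope.
    results = service.FindAll();
} // scope disposed here; the DbContext goes with it

// Enumerating now hits a disposed context and throws, even though
// 'results' looked perfectly valid when it was returned.
var customers = results.ToList();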
Problem 2: Singleton/Sharing Issues
Lifetime scopes are sometimes used to isolate units of work or sets of components that need shared context. For example, in MVC you only have one instance of the controller for the whole request - no matter how many times you resolve the controller object, for that request it'll be the same instance. You might see a similar thing with database connections - one connection from the pool allocated for an entire request lifetime.
By creating your own lifetime scope you are also creating a sort of logical unit of work. Any dependencies for IMyService will not be shared with the rest of the MVC request. In fact, it's more like that tiny lifetime scope is its own request or its own unit of work. No overlap.
General Resolution
As noted in the doc I linked to earlier, register things as InstancePerLifetimeScope if they need to be used in both MVC and non-MVC contexts and just let the MVC request semantics handle spinning up and disposal of scopes if possible.
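A rough sketch of that recommendation (the service names are illustrative):

var builder = new ContainerBuilder();

// InstancePerLifetimeScope resolves from whichever scope is asking:
// the MVC request scope inside the web app, or your own
// BeginLifetimeScope() in the legacy code paths.
builder.RegisterType<MyService>()
       .As<IMyService>()
       .InstancePerLifetimeScope();

var container = builder.Build();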
If that won't work, it'll be up to you and your app code to figure out if you can live with the issues here or if you need to address them. If you need to address them, that, too, will be app specific so there isn't "guidance" to provide - you're on your own for that.
We are using both Angular and Rails applications at our company. Is there a way to mock a web page to test the UI? Essentially I want to jump midway into an application, so I don't have to take time logging in, creating our object, tweaking the object, then finally getting to what I can test.
I was looking at something like MSL, but am unsure if it's really what I need.
Turns out MSL is what I wanted. My application depends on other applications for communication. In order to isolate my application, I can use MSL to mock the responses from, and the requests to, my dependent applications fairly easily. In my configuration I switch my external dependency to localhost:8001 (or whatever port MSL is running on).
Edit
I rewrote MSL specifically for my purposes and made it a little easier to use. The server is a Node program, and the client is only available in Ruby as of now.
mobe-server
mobe-client
I have an application that has been programmed with MVC/EF Code First. It does a lot of server-side processing and is pretty resource intensive.
I know how to set up load balancing, but, I want to know if scaling an EF application is as simple as provisioning a new server, deploying the application and pointing to the DB cluster - or are there any issues I will face with regards to multiple EF applications hitting the same database server?
I can't seem to find any advice/guides for this and I am worrying I made the wrong choice by choosing EF over something simpler/more straight forward!
... issues ... regards to multiple EF applications hitting the same database server?
Rewind a bit to the fact that your application is an ASP.NET MVC based application. Having multiple instances of it is probably going to raise the spectre of state management.
MSDN has a pretty good introduction to why this is an issue:
HTTP is a stateless protocol. This means that a Web server treats each HTTP request for a page as an independent request. The server retains no knowledge of variable values that were used during previous requests. ASP.NET session state identifies requests from the same browser during a limited time window as a session, and provides a way to persist variable values for the duration of that session. By default, ASP.NET session state is enabled for all ASP.NET applications.
Alternatives to session state include the following:
Application state, which stores variables that can be accessed by all users of an ASP.NET application.
This point is an extremely common way of storing state, but breaks down when there's multiple instances of an application involved (the state is "visible" to only one of the instances).
Typically this is worked around by using either the StateServer or SQLServer value of SessionStateMode. The same article provides a pretty good summary of each option (emphasis mine).
StateServer mode, which stores session state in a separate process called the ASP.NET state service. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
SQLServer mode stores session state in a SQL Server database. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
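For example, switching to the out-of-process state service is a small web.config change (the connection details below are placeholders):

<system.web>
  <!-- StateServer keeps session state in the ASP.NET state service,
       so every instance behind the load balancer sees the same session. -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=10.0.0.5:42424"
                timeout="20" />
</system.web>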
If your application is stateless, this is a moot point.
I am worrying I made the wrong choice by choosing EF
As far as issues with multiple instances of your application accessing a database go, you're going to have issues with any sort of data access technology.
Here's the basic scenario: let's say your application sends welcome emails to users on a schedule.
Given the table Users:
UserId | Email           | WelcomeLetterSent
-------+-----------------+------------------
     1 | user@domain.com |                 0
And some pseudo-code:
foreach (var user in _context.Users.Where(u => !u.WelcomeLetterSent))
{
    SendEmailForUser(user);
    user.WelcomeLetterSent = true;
}
_context.SaveChanges();
There's a race condition where both instance one and instance two of your application might simultaneously evaluate _context.Users.Where(...) before either of them has the chance to set WelcomeLetterSent = true and call SaveChanges. In this case, two welcome emails might get sent to each user instead of one.
Concurrency can be an insidious thing. There's a primer on managing concurrency with the Entity Framework over here, but this is only the tip of the iceberg.
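As one illustration of the sort of thing that primer covers, here is a hedged sketch of optimistic concurrency in EF Code First using a rowversion column (the entity shape and names are assumptions):

using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

public class User
{
    public int UserId { get; set; }
    public string Email { get; set; }
    public bool WelcomeLetterSent { get; set; }

    [Timestamp] // maps to a SQL Server rowversion column
    public byte[] RowVersion { get; set; }
}

// When saving, a competing update made between the read and the
// SaveChanges call now surfaces as an exception instead of a
// silent double-send:
try
{
    _context.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
    // Another instance already processed this user; skip or retry.
}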
The answer to your question? It depends on what your application does :)
On top of that, I ideally want to build some "extra" support applications that hook in to the same DB... and, I am just not sure how EF will handle multiple apps to the same DB....
If your application can tolerate multiple instances of itself accessing one database, then it's usually not a stretch to make these "support applications" play nicely. It's not much different whether the concurrency is from multiple instances of one application or multiple applications with one instance each.
Apologies in advance if this question is dumb or previously covered. I have researched far and wide but have not found any resources on WCF/Windows Services that cover it.
I have a managed Windows Service which is working nicely. Every n (>5) seconds it checks on the status (e.g. memory consumption) of some processes and other Windows services and also does some database logging and raises events where necessary.
I intend to make an ASP.NET website that would allow users to query the status of the processes that the Windows Service is monitoring. Having researched the options, it looks like the up-to-date method would be to use a WCF service, hosted in the Windows Service, to act as an intermediary between the ASP.NET website and the Windows Service, such that a user could request through the browser a snapshot of the current status of whatever set of processes the Windows Service was monitoring, and have this request and subsequent response relayed through the WCF service (using named pipes, I think).
So, my difficulty is that there is a set of methods and events in the Windows Service for which a single root object exists (let's say MonitorObject). I don't see how the ServiceHost can be instantiated with a reference to MonitorObject so that the WCF service can call the methods in the Windows Service. I am thinking that perhaps I need to make the MonitorObject a shared (I am VB'ing) member of the Windows Service class (the one that contains OnStart and OnStop) and make all the events shared, so that the WCF service can just access WindowsService.SharedMonitorObject without needing to be passed the object...
However, I am lost in the subject and am seeking any advice on how best to proceed.
Thanks in advance.
I think you're going down the right track. I wouldn't necessarily make the entire MonitorObject shared, but you might put a shared method in that object that will return the single root object to the caller.
There is a design pattern called the Singleton Pattern that will help you with this. Jon Skeet has written an excellent article on some of the things to be aware of when using this pattern in .NET. His article uses C# for the examples, but here's a SO question referencing this pattern using VB.
While it's unclear from your description, my guess is that your Windows Service is essentially single-threaded right now. Just keep in mind that once you add the WCF service, you'll need to make the methods that it references thread-safe.
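Incidentally, WCF can also host a pre-built object directly: ServiceHost has a constructor overload that accepts a singleton instance, which sidesteps the question of how the service class obtains the MonitorObject. A C# sketch (the contract and MonitorService class are made-up names, and GetSnapshot is a hypothetical method on your monitor):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMonitorService
{
    [OperationContract]
    string GetStatusSnapshot();
}

// InstanceContextMode.Single is required when handing an instance
// to the ServiceHost constructor.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class MonitorService : IMonitorService
{
    private readonly MonitorObject _monitor;

    public MonitorService(MonitorObject monitor)
    {
        _monitor = monitor;
    }

    public string GetStatusSnapshot()
    {
        return _monitor.GetSnapshot(); // hypothetical method
    }
}

// In the Windows Service's OnStart:
// var host = new ServiceHost(
//     new MonitorService(_monitorObject),
//     new Uri("net.pipe://localhost/monitor"));
// host.Open();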
I'm using ASP.NET MVC and I am trying to separate a lot of my logic. Eventually, this application will be pretty big. It's basically a SaaS app that I need to allow different kinds of clients to access. I have a two-part question: the first deals with my general design, and the second with how to implement it in ASP.NET MVC.
Primarily, there will initially be an ASP.NET MVC "client" front-end and there will be a set of web-services for third parties to interact with (perhaps mobile, etc).
I realize I could have the ASP.NET MVC app interact just through the Web Service but I think that is unnecessary overhead.
So, I am creating an API that will essentially be a DLL that the Web App and the Web Services will utilize. The API consists of the main set of business logic and Data Transfer Objects, etc. (So, this includes methods like CreateCustomer, EditProduct, etc for example)
Also, my permissions requirements are a little complicated. I can't really use a straight roles system, as I need some fine-grained permissions (though all permissions are positive rights). So I don't think I can really use the ASP.NET Roles/Membership system; or, if I can, it seems like I'd be doing more work than rolling my own. I've used Membership before, and for this project I think I'd rather roll my own.
Both the Web App and Web Services will need to keep security as a concern. So, my design is kind of like this:
Each method in the API will need to verify the security of the caller
In the Web App, each "page" ("action" in MVC speak) will also check the user's permissions (so, don't present the user with the "Add Customer" button if the user does not have that right; but also, whenever the API receives AddCustomer(), check the security there too)
I think the Web Service really needs the checking in the DLL, because it may not always be used in some kind of pre-authenticated context (like using Session/Cookies in a Web App). Also, having the security checks in the API means I don't really HAVE to check them anywhere else, say on a mobile client (an iPhone, for instance) where I don't want to do all kinds of checking
However, in the Web App I think there will be some duplication of work, since the Web App checks the user's permissions before presenting options. That's OK, but I was thinking of avoiding this duplication by allowing the Web App to tell the API not to check security, while the Web Service would always have security verified
Is this a good method? If not, what's better? If so, what's a good way of implementing this. I was thinking of doing this:
In the API, I would have two functions for each action:
// Here, "Credential" objects are just something I made up
public void AddCustomer(string customerName, Credential credential
, bool checkSecurity)
{
if(checkSecurity)
{
if(Has_Rights_To_Add_Customer(credential)) // made up for clarity
{
AddCustomer(customerName);
}
else
// throw an exception or somehow present an error
}
else
AddCustomer(customerName);
}
public void AddCustomer(string customerName)
{
// actual logic to add the customer into the DB or whatever
// Would it be good for this method to verify that the caller is the Web App
// through some method?
}
So, is this a good design or should I do something differently?
My next question: it doesn't seem like I can really use [Authorize ...] to determine whether a user has permission to do something. In fact, one action might depend on a variety of permissions, and the View might hide or show certain options depending on the permissions.
What's the best way to do this? Should I have some kind of PermissionSet object that the user carries around throughout the Web App in Session or whatever and the MVC Action method would check if that user can use that Action and then the View will have some ViewData or whatever where it checks the various permissions to do Hide/Show?
What you propose will not work. Actions can be cached, and when they are, the action (and hence your home-rolled security) does not run. ASP.NET membership, however, still works, since the MVC caching is aware of it.
You need to work with ASP.NET membership instead of trying to reinvent it. You can, among other things:
Implement a custom membership provider or role provider.
Subtype AuthorizeAttribute and reimplement AuthorizeCore (see the sketch after this list).
Use Microsoft Geneva/Windows Identity Foundation for claims-based access.
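A minimal sketch of the AuthorizeAttribute route; AuthorizeCore is the real MVC extension point, while PermissionService here is a hypothetical stand-in for your fine-grained checker:

using System.Web;
using System.Web.Mvc;

public class RequirePermissionAttribute : AuthorizeAttribute
{
    private readonly string _permission;

    public RequirePermissionAttribute(string permission)
    {
        _permission = permission;
    }

    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Let the base class verify authentication first.
        if (!base.AuthorizeCore(httpContext))
            return false;

        // Hypothetical fine-grained permission check.
        return PermissionService.HasPermission(
            httpContext.User.Identity.Name, _permission);
    }
}

// Usage:
// [RequirePermission("AddCustomer")]
// public ActionResult AddCustomer(...) { ... }

Because AuthorizeCore also runs during cache validation, this approach stays correct for cached actions, which is exactly where a home-rolled filter falls down.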
Also, I completely disagree with ChaosPandion, who suggests making structural changes in your code before profiling. Avoiding exceptions for "performance" reasons is absurd -- especially the idea that the mere potential to throw an exception for invalid users will somehow tank the performance for valid users. The slowest part of your code is likely elsewhere. Use a profiler to find the real performance issues instead of jumping on the latest micro-"optimization" fad.
The correct reason to avoid exceptions for authorizations is that the correct way to indicate an attempt at unauthorized access in a web app is to change the HTTP status code to 401 Unauthorized, not throwing an exception (which would return 500).
Define your authorisation requirements as a domain service so they are available to both the web and web service implementations.
Use an authorisation filter to perform your authorisation checks within the web application, this should be as simple as creating an auth request object and then passing it to your auth domain service.
If the authorisation fails, return the correct error - a 401 as indicated by Craig Stuntz.
ALWAYS authorise the action. If you can hide the link from unauthorised users, that's nice.
Simplify your views / view logic by writing an HtmlHelper extension method that can show / hide things based on a call to the auth domain service (see the sketch below).
To use your authorisation service from the web service is simply a matter of constructing the auth request object from something passed in via the service message instead of from a cookie passed by the users browser.
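A hedged sketch of such a helper, assuming a hypothetical IAuthorizationService that your container can resolve:

using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class AuthorizationHtmlExtensions
{
    // Renders an action link only when the current user holds the
    // given permission; otherwise renders nothing.
    public static MvcHtmlString ActionLinkIfAuthorized(
        this HtmlHelper html, string linkText, string actionName,
        string permission)
    {
        // IAuthorizationService is an assumption; resolve it however
        // your container exposes services to the view layer.
        var auth = DependencyResolver.Current
            .GetService<IAuthorizationService>();

        if (!auth.IsAuthorized(html.ViewContext.HttpContext.User, permission))
            return MvcHtmlString.Empty;

        return html.ActionLink(linkText, actionName);
    }
}

// In a view:
// @Html.ActionLinkIfAuthorized("Add Customer", "AddCustomer", "Customers.Add")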