I need to work with data in a separate thread in Orchard CMS.
The problem is that when the request ends, the session and services are disposed.
What is the best way to create a database session, or how can I work with data after the request finishes?
EDIT:
I am trying something like this code:
var builder = new ContainerBuilder();
builder.RegisterGeneric(typeof(Repository<>)).As(typeof(IRepository<>)).InstancePerLifetimeScope();
//builder.RegisterInstance(_shellSettings = new ShellSettings { Name = ShellSettings.DefaultName });
builder.RegisterType<TransactionManager>().As<ITransactionManager>().InstancePerLifetimeScope();
builder.RegisterType<SessionFactoryHolder>().As<ISessionFactoryHolder>().InstancePerLifetimeScope();
But I don't know exactly what to register; it throws an error when resolving the repository.
Spawning threads on a web server is bad: it reduces the server's ability to serve many requests simultaneously. You should consider offloading your task to some other process, like a Windows service, communicating through MSMQ for example.
Otherwise, consider letting your task instantiate and dispose of the services and sessions it needs itself, instead of using those bound to the request life-cycle. You may need to set up a dedicated dependency resolver for this, letting the task explicitly control the lifetime of the objects it requests from the resolver.
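In Orchard specifically, rather than wiring up your own container, one way to follow that advice is to have the background work create its own work context scope and resolve its services from there. A minimal sketch, assuming Orchard 1.x's IWorkContextAccessor and a made-up MyRecord class:

public class MyBackgroundWork
{
    private readonly IWorkContextAccessor _workContextAccessor;

    public MyBackgroundWork(IWorkContextAccessor workContextAccessor)
    {
        _workContextAccessor = workContextAccessor;
    }

    public void Run()
    {
        // Create a scope that is independent of the current HTTP request, so the
        // repository and its session live as long as this scope, not the request.
        using (var scope = _workContextAccessor.CreateWorkContextScope())
        {
            var repository = scope.Resolve<IRepository<MyRecord>>(); // MyRecord is a made-up record class
            repository.Create(new MyRecord { Value = "written off-request" });
        } // the unit of work completes when the scope is disposed
    }
}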
I am developing an application with ZF 2.5. I need to build an SSE (Server-Sent Events) module, but I can't manage to do it from a controller: the controller does not keep my connection (of type text/event-stream) alive. So I am doing this in a separate PHP file, but I need authentication there, and need to reach Zend's service manager from this file, "outside" the Zend environment.
Is it possible? Any suggestions?
Yes, you can do this from within ZF2, but it is not easy. The basis of SSE is that the connection is kept open, so you need a while(true) loop or something similar in PHP to keep the process running.
A controller is terminated when its action is done, and only then is the response sent, so you have to get this loop logic into a controller. Next, ZF2's response handler buffers all output and then sends all the data at once. You need to rework the ZF2 output buffering flow so you can send data directly from your controller without the buffering; otherwise your while(true) loop never sends data until you break out of it.
So the short answer: almost anything is possible in ZF2, including your needs. But it is not that straightforward.
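For reference, the raw stream such a loop has to produce looks like this plain-PHP sketch; the ZF2-specific work is getting the framework's buffering out of the way of these flushes:

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

while (true) {
    echo "data: " . json_encode(array('time' => time())) . "\n\n";
    @ob_flush(); // flush PHP's output buffer if one is active...
    flush();     // ...and push the event through to the client immediately
    sleep(1);
}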
The alternative is to load the service manager in your stand-alone script. This is also perfectly possible. Using the application config, merged with the other configs, you need to build your complete configuration and provision the service manager with it. Once it is instantiated, you can fully utilize its services.
Here too, instantiating only the service manager can be hard. It is easier to instantiate the whole application and grab the service manager from it:
$app = Zend\Mvc\Application::init(include 'config/application.config.php');
$sm = $app->getServiceManager();
Note that you don't run() the app, only bootstrap it!
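Once you have $sm, authentication in your stand-alone script is a matter of pulling your auth service out of it. The service name below is an assumption (it is what ZfcUser registers); use whatever your application actually configures:

// continuing from the two lines above
$auth = $sm->get('zfcuser_auth_service'); // assumed service name (ZfcUser)
if (!$auth->hasIdentity()) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}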
From One DbContext per web request... why?
My understanding is that a DbContext instance should not be shared across concurrent web requests, so definitely not across threads.
But how about sharing it across non-concurrent web requests?
Due to thread agility (What is the meaning of thread-agility in ASP.Net?), am I right that a thread can handle more than one web request before it dies?
If so, is it safe to dependency inject a DbContext instance for each thread?
The reason for this is that I'm using Unity, which does not include a per-request lifetime option.
From MVC, EF - DataContext singleton instance Per-Web-Request in Unity, I think I could use a custom LifetimeManager; I'm just wondering if it is safe and sufficient to use PerThreadLifetimeManager.
is it safe to dependency inject a DbContext instance for each thread?
It depends. If your idea is to have one DbContext per web request, and the consistency of your application depends on that, having one DbContext per thread is a bad idea, since thread agility means a single web request can still get multiple DbContext instances. And since ASP.NET pools threads, instances cached per thread will live for the duration of the entire application, which is very bad for a DbContext (as explained here).
On the other hand, you might be able to come up with a caching scheme that ensures a single DbContext is used for a single web request and is returned to a pool when the request is finished, so other web requests can pick it up. This is basically how connection pooling in .NET works. But since DbContext instances cache data, that data becomes stale very quickly, so even if you can come up with a thread-safe solution, your system still behaves inconsistently: at seemingly random moments old data is shown to the user, while on a following request new data is shown.
I think it is possible to clear the DbContext's cache at the beginning of a web request, but that would basically be the same as creating a new DbContext for that request, only with much slower performance.
I'm just wondering if it is safe and sufficient to use PerThreadLifetimeManager.
No, it isn't safe because of the reasons described above.
But it is actually quite easy to register a DbContext on a per-web request basis:
container.RegisterType<MyApplicationEntities>(new InjectionFactory(c => {
    // Cache the context in HttpContext.Items so that every resolve
    // within the same web request returns the same instance.
    var context = (MyApplicationEntities)HttpContext.Current.Items["__dbcontext"];
    if (context == null) {
        context = new MyApplicationEntities();
        HttpContext.Current.Items["__dbcontext"] = context;
    }
    return context;
}));
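One thing that factory does not handle is disposal. A sketch of cleaning up when the request ends, for example in Global.asax, assuming the same "__dbcontext" key:

protected void Application_EndRequest(object sender, EventArgs e)
{
    // Dispose the per-request DbContext, if this request ever created one.
    var context = HttpContext.Current.Items["__dbcontext"] as MyApplicationEntities;
    if (context != null)
    {
        context.Dispose();
    }
}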
If you use multiple DbContext instances per request, you can end up with a multi-threaded application with a high chance of losing data integrity. One web request can be viewed as one transaction; but with PerThreadLifetimeManager you would have multiple distinct transactions which aren't related, though perhaps they should be. For example, posting a form with a lot of data can end up saving to multiple database tables, and with two or more independent contexts it can happen that one insert succeeds while another fails, leaving you with inconsistent data.
The other important thing is that the ASP.NET infrastructure uses thread pooling, so every started thread is reused after the request has finished, and if something went wrong in one request it can affect another one. That is why it is not recommended to use any thread-local storage (statics scoped to the thread) in a thread-pool environment: we cannot control the threads' lifetimes.
I have an ASP.NET MVC application that uses NHibernate to persist data into a SQL Server database.
There are cases where I want to save an entry into a database (initially triggered by a call into an action method on a controller) but there's no need to block the caller.
Is it "safe" to implement a fire-and-forget mechanism for the database call that puts it into a Task and invokes it in the background, so control can return immediately to the caller? (Or to accomplish the same thing with BackgroundWorker or the async/await keywords.) I need a solution where NHibernate will not get tripped up by ASP.NET trying to clean up its ISession, which is per-request. I'm using Autofac for lifetime management of the session. I assume that the database operation would have a slightly longer lifetime than the web request itself, and I'm not sure how smoothly that would work.
It is not safe to do this; I have a blog post on the subject. The problem is that when you have no requests in progress, it is possible for your entire AppDomain to be torn down. Also, consider what would happen if the database insert failed for some reason: if you return early, there's no way to notify the client of the error.
A reliable solution must store the data in some kind of persistent place before returning success to the caller. This can be directly in the database, or in a queue of some kind (to be later processed by an independent worker).
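As a hedged sketch of that approach with NHibernate: the action saves a hypothetical PendingCommand row through the normal per-request ISession and returns at once, and an independent worker later polls the table and does the real work. _session, PendingCommand and MyInput are all assumed names here:

public ActionResult Save(MyInput input)
{
    // Persist the work item inside the request's own session/transaction,
    // so nothing has to outlive the request.
    _session.Save(new PendingCommand
    {
        Payload = JsonConvert.SerializeObject(input), // Json.NET serialization
        CreatedUtc = DateTime.UtcNow,
        Processed = false
    });
    return Json(new { queued = true });
}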
I'm planning to build quite a large application (large in terms of concurrent users / number of requests, not in terms of features).
Basically, I'll have a service somewhere that waits for commands, executes them, and acknowledges their completion later. This service will communicate over a service bus, so execution is eventual, with an acknowledgement message issued once a command completes.
The consumers of this service can be any kind of application (WPF, Silverlight, ...), but my main (and first) client will be an ASP.NET MVC application + Web API (.NET 4.5), or MVC only (.NET 4.0) with Ajax controller actions.
The web application will rely on Ajax calls to stay user-friendly and responsive.
I'm quite new to such a fully asynchronous architecture, and I have some questions to avoid future headaches:
My Web API calls can take some time. How should I properly design the API to support long-running operations (some kind of async?)? I've read about the new async keyword, but for the sake of knowledge, I'd like to understand what's behind it.
My calls to the service will consist of publishing a message and waiting for the ack message. If I wrap this in a single method, how should I write it? Should I "block" until the ack is received (I suppose I shouldn't)? Should I return a Task object and let the consumer decide?
I'm also wondering if SignalR can help me. With SignalR, I think I can issue commands in a true "fire and forget" way, and route the ack message up to the client.
Am I completely off track, and should I take another approach?
In terms of implementation details / frameworks, I think I'll use:
RabbitMQ as the messaging system
MassTransit to abstract the messaging system
ASP.NET MVC 4 to build the UI
Web API to isolate command issuing from the UI controllers, and to allow other kinds of clients to issue commands
My Web API calls can take some time. How should I properly design the API to support long-running operations (some kind of async?)?
I'm not 100% sure where you're going. You ask about async, but you also mention message queuing by throwing in RabbitMQ and MassTransit. Message queuing is asynchronous by default.
You also mention executing commands. If you're referring to CQRS, you separate commands and queries. But what I'm not 100% sure about is what you're referring to when you mention "long running processes".
When you query data, the data should already be present, preferably in the shape needed for the question at hand.
When you query data, no long-running process should be started.
When you execute commands, a long-running process can be started. But that's exactly why you should use message queuing: define a task to start the long-running process, create a message for it, throw it onto the queue, and forget about it altogether. Some other process in the background will pick it up (see the sketch after this list).
When the command is executed, the long-running process can be started.
When the command is executed, a database can be updated with data.
This data can be used by the API when someone requests data.
When using this model, it doesn't matter that the long-running process might take up to 10 minutes to complete. I won't go into detail on actually having a single thread take up to 10 minutes to complete, including locks on the database, but I hope you get the point. Your API will be free almost instantly after throwing a message onto the queue. No need for async there.
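To make that concrete, here is a minimal sketch of a Web API action that only publishes a command and returns 202 Accepted. StartLongProcess and its request type are made-up names, and the static Bus.Instance is the MassTransit 2.x style shown a little further below:

public class ProcessController : ApiController
{
    public HttpResponseMessage Post(StartLongProcessRequest request)
    {
        // Throw the command onto the queue and forget about it;
        // a background consumer picks it up and does the long-running work.
        Bus.Instance.Publish(new StartLongProcess
        {
            JobId = Guid.NewGuid(),
            Payload = request.Payload
        });

        // 202 Accepted: the work is queued, not completed.
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }
}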
My calls to the service will consist of publishing a message and waiting for the ack message.
I don't get this. The .NET Framework and your queuing platform take care of this for you. Why would you wait on an ack?
In MassTransit
Bus.Instance.Publish(new YourMessage{Text = "Hi"});
In NServiceBus
Bus.Publish(new YourMessage{Text = "Hi"});
I'm also wondering if SignalR can help me.
I should think so! Because of the asynchronous nature of messaging, the user has to 'wait' for updates. If you can provide this data by 'pushing' updates via SignalR to the user, all the better.
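For instance, a consumer of the ack message could push the update to the browser through a SignalR hub. A sketch assuming a MassTransit 2.x consumer, with JobCompleted and JobHub as made-up names:

public class JobCompletedConsumer : Consumes<JobCompleted>.All
{
    public void Consume(JobCompleted message)
    {
        // Resolve the hub context and notify all connected clients.
        var hub = GlobalHost.ConnectionManager.GetHubContext<JobHub>();
        hub.Clients.All.jobCompleted(message.JobId); // dynamic call into client-side JS
    }
}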
Am I completely off track, and should I take another approach?
Perhaps; I'm still not sure where you're going.
Either way, read up on the following resources:
http://www.udidahan.com/2013/04/28/queries-patterns-and-search-food-for-thought/
http://www.udidahan.com/2011/10/02/why-you-should-be-using-cqrs-almost-everywhere%E2%80%A6/
http://www.udidahan.com/2011/04/22/when-to-avoid-cqrs/
http://www.udidahan.com/2012/12/10/service-oriented-api-implementations/
http://bloggingabout.net/blogs/dennis/archive/2012/04/25/what-is-messaging.aspx
http://bloggingabout.net/blogs/dennis/archive/2013/07/30/partitioning-data-through-events.aspx
http://bloggingabout.net/blogs/dennis/archive/2013/01/04/databases-and-coupling.aspx
My Web API calls can take some time. How should I properly design the API to support long-running operations (some kind of async?)? I've read about the new async keyword, but for the sake of knowledge, I'd like to understand what's behind it.
Regarding async, I saw this link being recommended in another question on Stack Overflow:
http://msdn.microsoft.com/en-us/library/ee728598(v=vs.100).aspx
It says that when a request is made to an ASP.NET application, a thread from a limited thread pool is assigned to process the request.
An asynchronous controller action releases that thread back to the thread pool so it is ready to accept additional requests. Within the action, the operation that needs to run asynchronously is kicked off, and its result is delivered to a callback controller action.
The asynchronous controller action is named with an Async suffix, and the callback action has a Completed suffix.
public void NewsAsync(string city) {}
public ActionResult NewsCompleted(string[] headlines) {}
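Filled in, the pattern from that MSDN article looks roughly like this; NewsService is the article's hypothetical headline service, and AsyncManager ties the two actions together:

public class PortalController : AsyncController
{
    public void NewsAsync(string city)
    {
        AsyncManager.OutstandingOperations.Increment();
        var newsService = new NewsService(); // hypothetical I/O-bound service
        newsService.GetHeadlinesCompleted += (sender, e) =>
        {
            // Hand the result to NewsCompleted and mark this operation finished.
            AsyncManager.Parameters["headlines"] = e.Value;
            AsyncManager.OutstandingOperations.Decrement();
        };
        newsService.GetHeadlinesAsync(city);
    }

    public ActionResult NewsCompleted(string[] headlines)
    {
        return View(headlines);
    }
}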
Regarding when to use Async:
In general, use asynchronous pipelines when the following conditions are true:
The operations are network-bound or I/O-bound instead of CPU-bound.
Testing shows that the blocking operations are a bottleneck in site performance and that IIS can service more requests by using asynchronous action methods for these blocking calls.
Parallelism is more important than simplicity of code.
You want to provide a mechanism that lets users cancel a long-running request.
I think developing your service with ASP.NET MVC and Web API, using async controllers where needed, would be a good approach to building a highly available web service.
Using a message based service framework like ServiceStack looks good too:
http://www.servicestack.net/
Additional resources:
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
http://www.codethinked.com/net-40-and-systemthreadingtasks
http://dotnet.dzone.com/news/net-zone-evolution
http://www.aaronstannard.com/post/2011/01/06/asynchonrous-controllers-ASPNET-mvc.aspx
http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2287
http://www.dotnetcurry.com/ShowArticle.aspx?ID=948 // also shows setup of performance tests
http://www.asp.net/mvc/tutorials/mvc-4/using-asynchronous-methods-in-aspnet-mvc-4
http://visualstudiomagazine.com/articles/2013/07/23/async-actions-in-aspnet-mvc-4.aspx
http://hanselminutes.com/327/everything-net-programmers-know-about-asynchronous-programming-is-wrong
http://www.hanselman.com/blog/TheMagicOfUsingAsynchronousMethodsInASPNET45PlusAnImportantGotcha.aspx
In a Java web app, I need to call a remote SOAP service, and I'm trying to use a CXF 2.5.0-generated client. The SOAP service is provided by a particular ERP vendor, and its WSDL is monstrous: thousands of types, dozens of XSD imports, and so on. wsdl2java generates the client fine, thanks to the -autoNameResolution flag. But at runtime it retrieves the remote WSDL twice: once when I create the service object, and again when I create a port object.
MyService_Service myService = new MyService_Service(giantWsdlUrl); // fetches giantWsdl
MyService myPort = myService.getMyServicePort(); // fetches giantWsdl again
Why is that? I can understand retrieving it when creating myService: you want to check that it matches the client I'm currently using, or let a runtime WSDL location dictate the endpoint address, etc. But I don't understand why asking for the port would reload everything it just went out on the wire for. Am I missing something?
Since this is in a web application and I can't be sure that myPort is thread-safe, I'd have to create a port for each thread, except that's way too slow: 6 to 8 seconds, thanks to the monstrous WSDL. Or add my own pooling: create a bunch in advance and do check-outs and check-ins. Yuck.
For the record, the JaxWsProxyFactoryBean creation route does not ever fetch the wsdl, and that's good for my situation. It still takes a long time on the first create(), then about a quarter second on subsequent create()s, and even that's less than desirable. And I dunno... it sorta feels like I'm under the hood hotwiring the thing rather than turning the key. :)
Well, you have actually answered the question yourself. Each time you invoke service.getPort(), the WSDL is loaded from the remote site and parsed. JaxWsProxyFactoryBean goes exactly the same way, but once the proxy is obtained, it is re-used for further invocations. That is why the first run is slow (because of the "warming up"), but subsequent runs are fast.
And yes, proxies built this way are not thread-safe. Pooling client proxies is an option, but unfortunately it will eat a lot of memory, as the JAX-WS runtime model is not shared among client proxies; synchronization is perhaps the better way to go.
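A sketch of that build-once-and-synchronize approach, assuming MyService is the wsdl2java-generated interface and someOperation, SomeRequest and SomeResponse are made-up names:

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

public class MyServiceClient {
    private final MyService port;

    public MyServiceClient(String endpointAddress) {
        // Build the proxy once at startup; create() is the slow call.
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(MyService.class);
        factory.setAddress(endpointAddress); // no WSDL fetch on this route
        this.port = (MyService) factory.create();
    }

    // Serialize access because the shared proxy may not be thread-safe.
    public synchronized SomeResponse call(SomeRequest request) {
        return port.someOperation(request);
    }
}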