Groovy/Grails, thread safety and closures

I'm investigating what looks like a race condition in my Grails application. I occasionally see what I think can only be the result of 2 different threads interfering with one another when the application is under a lot of load.
The offending code builds a query using a closure (which is defined at class-level, like a method) which dynamically adds query parameters based on the properties of the domain class. On the surface the code looks fine (to me, as a Java developer), however I stumbled across this comment regarding controller scope in the Grails docs (emphasis mine):
prototype (default) - A new controller will be created for each request (recommended for actions as Closure properties)
session - One controller is created for the scope of a user session
singleton - Only one instance of the controller ever exists (recommended for actions as methods)
Reference
So my question is: what are the implications of using a closure (instead of a method) in a service which is a singleton?
EDIT:
The code is part of the grails quick search plugin:
https://github.com/tadodotcom/grails-quick-search/blob/master/grails-app/services/org/grails/plugins/quickSearch/QuickSearchService.groovy
There are 2 closures, aliasBuilder and propertyQueryBuilder, which I think may not be thread-safe.

Should you still need a solution, I just happened to fork that plugin and I stumbled upon the same problem (in the same concurrent scenario).
Long story short, helped by this post, I applied the @Synchronized annotation to the search method of the QuickSearchService service and the exception (an NPE, by the way) went away.
HTH
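To make the failure mode concrete, here is a minimal Java sketch of the same situation (the class and method names are made up, not the plugin's actual code): a singleton service builds its result incrementally in a shared field, exactly like a class-level closure mutating shared state. Synchronizing the entry point, the Java analogue of Groovy's @Synchronized, makes the clear/build/read sequence atomic per call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a singleton "service" whose query-building step
// mutates shared state, analogous to a class-level closure in Grails.
public class SingletonSketch {
    // Shared mutable field: the root of the race condition.
    private final List<String> queryParts = new ArrayList<>();

    // synchronized is the Java analogue of Groovy's @Synchronized:
    // the clear/build/read sequence becomes atomic per call.
    public synchronized String buildQuery(String property) {
        queryParts.clear();            // step 1: reset shared state
        queryParts.add("WHERE");       // step 2: build incrementally
        queryParts.add(property);
        return String.join(" ", queryParts); // step 3: read it back
    }

    public static void main(String[] args) throws InterruptedException {
        SingletonSketch service = new SingletonSketch();
        AtomicBoolean corrupted = new AtomicBoolean(false);
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            final String prop = "p" + i;
            Thread t = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    // With synchronization, every thread sees only its own clause.
                    if (!service.buildQuery(prop).equals("WHERE " + prop)) {
                        corrupted.set(true);
                    }
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();
        System.out.println(corrupted.get() ? "corrupted" : "ok");
    }
}
```

Remove the `synchronized` keyword and run it under load, and the interleaved clear/add steps produce exactly the kind of corrupted intermediate state (and NPEs, when a read lands mid-clear) described above.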

InRequestScope is not disposing

I'm using Ninject in an ASP.NET MVC project for database binding. In "private static IKernel CreateKernel()", I'm binding a database object like below:
kernel.Bind<IDbSession>().ToProvider<IDbSession>(new NhDbSessionProvider()).InRequestScope();
This works almost exactly as intended, but some pages on the site use AJAX calls to methods on the controller and it seems like every one of these calls opens a SQL connection and doesn't dispose of it when the controller returns. Eventually this causes the NumberOfConnections on the SQL database to exceed the max and the site throws errors that connections aren't available.
I'm pretty new to Ninject and taking over an existing project trying to make this work without doing major changes. Any ideas what I can do to get these objects to dispose? It seems like they should be automatically doing this already from what I'm reading, but maybe I just don't understand.
If I need to share more code let me know.
Disposable instances are Disposed at end of request processing, according to the documentation.
However, the documentation for InRequestScope has a section on Ensuring Deterministic Dispose calls in a Request Processing cycle.
Specifically:
Use the Ninject.Web.Common package
Manually register OnePerRequestModule in web.config (e.g. if you're not using a Ninject.Web.MVC* NuGet package)
It is perhaps the case that your instances are being/will be Disposed, but just not at the time you're expecting them to be.
Seems someone commented out this line for whatever reason (it wasn't me). Adding it back resolves the problem.
DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
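For anyone wondering what that module actually buys you, here is a hypothetical Java sketch of the underlying pattern (the names are illustrative, not Ninject's API): every disposable resolved during a request is tracked by a request scope, and all of them are disposed deterministically when the request ends, which is exactly the behaviour that goes missing when the module registration is commented out.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a "one per request" disposal module:
// disposables resolved during the request are tracked, then all
// disposed deterministically when the request ends.
public class RequestScope implements AutoCloseable {
    // A stand-in disposable, e.g. a database session.
    public static class FakeDbSession implements AutoCloseable {
        public boolean closed = false;
        @Override public void close() { closed = true; }
    }

    private final Deque<AutoCloseable> tracked = new ArrayDeque<>();

    // Every disposable resolved during the request gets registered here.
    public <T extends AutoCloseable> T track(T instance) {
        tracked.push(instance);
        return instance;
    }

    // Called at end-of-request: dispose everything, newest first.
    @Override
    public void close() {
        while (!tracked.isEmpty()) {
            try { tracked.pop().close(); } catch (Exception ignored) { }
        }
    }

    public static void main(String[] args) {
        RequestScope.FakeDbSession session;
        try (RequestScope scope = new RequestScope()) { // request begins
            session = scope.track(new FakeDbSession()); // resolved in-request
        }                                               // request ends here
        System.out.println(session.closed);             // the session was disposed
    }
}
```

Without the end-of-request hook, tracked instances are only released whenever the garbage collector gets around to them, which is how connections pile up past the pool limit.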

Multitenancy, help understand concept of WorkContext

While trying to understand how a multitenant environment works, I came across the concept of a WorkContext.
Introduction:
In a multitenant environment, tenants are clients which share similar functionality and run under a single instance of a root application. In order to add tenant-specific functionality, I came to the conclusion that Dependency Injection is the right solution for my case.
Each tenant should have its own IoC container, in order to be able to resolve its dependencies at runtime.
But when trying to put the theory into practice, I have some trouble wrapping up all the tenant-specific data.
From digging around the internet, it seems there must exist some sort of TenantContext, so that each tenant has its own isolated context.
The problem is that I don't understand the true lifecycle of such a context.
Question 1:
What is the lifecycle diagram of a tenant-specific WorkContext? Where should I store it, and when should it be created/disposed?
NOTE: If Question 2 below is answered, there is no need to answer Question 1.
Then I suddenly found the Orchard Project, which seems to be a true masterpiece. Inside Orchard, the context I'm talking about is called WorkContext.
I'm trying to understand the concept of WorkContext in Orchard Project. As far as I understand, here are some thoughts about WorkContext:
The WorkContext is a per-request lifetime object.
It is stored in HttpContext.Items (there is also a thread-static implementation).
It wraps the tenant IoC scope (ShellContext -> ShellContainer).
It is accessed through IWorkContextAccessor.
What I don't understand is:
Question 2:
Why do we need to include IWorkContextAccessor instance in route's DataTokens like this: routeData.DataTokens["IWorkContextAccessor"] = _workContextAccessor; ? Is this really necessary?
Kind of big question :-).
Firstly, WorkContext is more or less an abstraction of the HTTP context. A WorkContext lives as long as its work context scope lives (see IWorkContextAccessor, which you can use to create work context scopes), so you can actually have multiple work contexts per request, and you can have a work context independently of a request too (this happens in background tasks).
Such work contexts are thus externally managed, and so they somehow have to "travel" along with their scope until the latter is terminated: in Orchard the WorkContext is carried either in the HTTP context or in a thread-static field (which is not good enough and should be changed).
Thus, basically, a work context scope is the lowest dependency scope commonly used. It also has a parent, the shell's scope: this is the shell context (or more precisely, its lifetime scope). You can access a shell's context (a shell is most of the time equivalent to a tenant) through IOrchardHost.GetShellContext(). Work context scopes are actually created from the shell context's lifetime scope.
This, in turn, has a parent too: the application-wide HostContainer.
Most of the time you don't have to manage the WC yourself since the ambient WC around requests and background tasks are managed for you.
Regarding your second question: I'm not entirely sure about this but AFAIK passing the WCA to the routeData just serves as a workaround to be able to access it (and thus, Orchard services) from strange places like HTML helpers and attributes that are not resolved through the DI container.
Edit: also added this to the Dojo Library, redacted and improved: http://orcharddojo.net/orchard-resources/Library/Wiki/WorkContext
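The scope nesting described above can be sketched in a few lines of Java (hypothetical names; this is not Orchard's actual implementation): an application-wide host scope, a per-tenant shell scope as its child, and a short-lived work context scope created from the shell scope, with resolution walking up the chain.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the scope hierarchy described above:
// host scope -> shell (tenant) scope -> work context scope.
public class Scope {
    private final Scope parent;
    private final Map<String, Object> registrations = new HashMap<>();

    public Scope(Scope parent) { this.parent = parent; }

    public void register(String name, Object service) {
        registrations.put(name, service);
    }

    // Resolution walks up the chain: work context -> shell -> host.
    public Object resolve(String name) {
        if (registrations.containsKey(name)) return registrations.get(name);
        return parent != null ? parent.resolve(name) : null;
    }

    public Scope createChildScope() { return new Scope(this); }

    public static void main(String[] args) {
        Scope host = new Scope(null);                 // application-wide container
        host.register("clock", "sharedClock");
        Scope shell = host.createChildScope();        // one per tenant
        shell.register("tenantName", "tenantA");
        Scope workContext = shell.createChildScope(); // per request / background task
        System.out.println(workContext.resolve("tenantName")); // found in the shell
        System.out.println(workContext.resolve("clock"));      // found in the host
    }
}
```

The short-lived work context scope sees tenant-specific registrations from its shell parent and application-wide ones from the host, while anything registered in the work context scope itself dies with the request.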

ASP.NET Mvc - AutoMapper Best Practice - Performance

I've not looked at the AutoMapper source code yet but was just about to make some changes to an API controller in my solution and had a thought.
The way I like to code is to keep my controller methods as concise as possible; for instance, I make use of a generic Exception attribute to handle try{}catch{} scenarios.
So only the code that is absolutely relevant to the controller action is actually in the action method.
So I just arrived at a situation where I need to create an AutoMapper map for a method. I was initially thinking that I would add this (as I have done previously) to the controller constructor so its available immediately.
However, as the controller grows, following this pattern may introduce a lot of unnecessary AutoMapper work depending on which controller action method is invoked.
Considering controllers are created and destroyed per request this could get expensive.
What are some recommendations around this? Considering AutoMapper is accessed statically, I was wondering whether its internals live beyond the request lifetime and whether it internally checks for an existing map before creating a new one each time CreateMap() is invoked.
You should create your maps (CreateMap) once per AppDomain, ideally when this domain starts (Application_Start).
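The "configure once at startup" idea looks like this in a hypothetical Java sketch (illustrative names, not AutoMapper's API): the expensive mapping configuration is built exactly once when the class loads, the analogue of Application_Start, and every per-request call is just a cheap lookup.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the advice above: build the (expensive)
// mapping configuration exactly once at startup, then reuse it
// for every request.
public class MapperRegistry {
    // Populated once, at class load time -- the analogue of Application_Start.
    private static final Map<String, Function<Integer, String>> MAPS =
            new ConcurrentHashMap<>();
    static {
        // Stand-in for a CreateMap call; imagine reflection-heavy setup here.
        MAPS.put("intToLabel", n -> "item-" + n);
    }

    // Per-request work is just a lookup plus the mapping itself.
    public static String map(String name, int source) {
        return MAPS.get(name).apply(source);
    }

    public static void main(String[] args) {
        System.out.println(map("intToLabel", 1));
    }
}
```

Registering maps in a controller constructor, by contrast, repeats the setup cost on every request, since controllers are created per request.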

What is the connection between validate() and hasErrors()

This question comes from the problem in another question of mine. In that question, I came across a situation where the hasErrors() function doesn't work for a non-persistent domain class, even after doing everything following the instructions in section 7.5.
Following Victor's way, I fixed the problem by calling validate(), but I don't understand why it works. The Grails documentation seems to say nothing about having to call validate() before the hasErrors() function. How can this happen?
It does make sense to me that validate would need to be called before asking an object whether it hasErrors (or save for proper domain objects, which calls validate under the covers). Validate in this context means "check whether this object is valid and indicate any errors if not".
Alternatively the GORM implementation would have to call validate every time any change is made to an object, which to me would be less desirable behaviour, as it might involve lots of work being done often and unnecessarily (some of those constraints could involve a lot of work).
The beginning of section 7.2 states pretty clearly "To validate a domain class you can call the validate method on any instance". It also states that "within Grails there are essentially 2 phases of validation, the first phase is data binding which occurs when you bind request parameters onto an instance such as... At this point you may already have errors in the errors property due to type conversion (such as converting Strings to Dates). You can check these and obtain the original input value using the Errors API. ... The second phase of validation happens when you call validate or save. This is when Grails will validate the bound values against the constraints you defined."
The documentation for hasErrors also mentions this. You can access it by finding the method in the navigation frame on the left when you are on the documentation site. I would always recommend looking at these as well as the more descriptive user guide pages, as they often give a little more detail.
Here's the page for the validate method too.
I've never had a problem calling validate directly - it's very clear to me and I can choose the point where all the work is done and I'm ready for the validation to take place. I can't see an option to change this behaviour anywhere, but if you wanted validate to be called automatically or under certain conditions, you could perhaps use some Groovy meta programming magic by maybe adding invokeMethod to the class and have it call validate before passing certain calls on. Have a look here and here.
(Not sure I would recommend that though! And bear in mind your class would now be dependent on being used within the GORM validation framework, as that validate method might not otherwise exist).
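The two-phase contract being described can be boiled down to a hypothetical Java sketch (made-up names, not Grails code): hasErrors() is a cheap read of whatever the last validate() call recorded; it never re-runs the constraints itself, which is why calling it on a fresh object reports no errors even when the data is invalid.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the validate()/hasErrors() contract:
// hasErrors() only reports what the last validate() call recorded.
public class ValidatableSketch {
    private final String name;               // constraint: must be non-blank
    private final List<String> errors = new ArrayList<>();

    public ValidatableSketch(String name) { this.name = name; }

    // The expensive step: run the constraints and record any failures.
    public boolean validate() {
        errors.clear();
        if (name == null || name.isBlank()) errors.add("name is blank");
        return errors.isEmpty();
    }

    // Cheap read of the last validation result -- no constraint work here.
    public boolean hasErrors() { return !errors.isEmpty(); }

    public static void main(String[] args) {
        ValidatableSketch obj = new ValidatableSketch("");
        System.out.println(obj.hasErrors()); // no errors yet: validate() never ran
        obj.validate();
        System.out.println(obj.hasErrors()); // now the errors are populated
    }
}
```

This matches the answer above: save triggers validation for persistent domain objects, but for a non-persistent class nothing populates the errors until you call validate() yourself.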

Random DataReader errors and thread-specific DataContext allowing for changing DataLoadOptions

My .NET MVC project has reached the stage of testing with multiple users and I am getting seemingly random errors (from any screen in the site) relating to Linq2Sql DataReader issues, such as:
'Invalid attempt to call FieldCount when reader is closed.' and
'ExecuteReader requires an open and available Connection. The connection's current state is closed.'
These errors also appear, less frequently, during single-user testing if a link is double-clicked, or when multiple browser tabs or different browsers are refreshed simultaneously.
I'm wondering if these issues are down to DataContext threading problems.
At the moment I am using the repository approach, with separate repositories for each business process. Each repository class fires up a DataContext instance in its constructor, and this is used by most methods in the repository.
However some methods are updating the DataLoadOptions to force eager loading of view data so these methods create their own instance of the DataContext.
Also, some screens display information from multiple business objects, so there may be 2 or 3 repositories involved in a single request. Consequently there could be many separate DataContext instances created per request.
I've tried to enforce an eager loading approach throughout using DataLoadOptions where necessary and applying ToList() on query results to make sure everything is loaded up front (not waiting until the view is rendered) - so each DataContext should only be open for a fairly short period.
As the errors appear to be related to multiple threads reusing the same DataContext(s), I'm thinking of implementing a single DataContext per request along the lines of Rick Strahl's Thread Specific DataContextFactory (http://www.west-wind.com/weblog/posts/246222.aspx) or a simpler “unit of work datastore” approach as in Steve Sanderson's example (http://blog.stevensanderson.com/2007/11/29/linq-to-sql-the-multi-tier-story/).
But then there's the issue of DataLoadOptions to resolve. I could create additional thread-specific DataContexts, but that seems to get away from my goal of a single DataContext per request. So I'm looking at either reusing the same DataContext but temporarily changing the LoadOptions for some methods, as in Kevin Watkins's example (http://www.mrkwatkins.co.uk/Blog/2010/05/), or scrapping the standard DataLoadOptions approach in favour of using preload stored procedures to pre-populate EntityRefs, as discussed by Roger Jennings in Visual Studio Magazine (http://visualstudiomagazine.com/Articles/2007/11/01/Optimize-LINQ-to-SQL-Performance.aspx?Page=3).
So my question is: has anyone experienced similar random DataReader issues, how did you resolve them, and am I likely to resolve the problem by implementing the solutions I've linked to?
As always time is tight - so I don't want to spend it implementing a solution if the problem is actually somewhere else. Any help would be greatly appreciated!
PS. I'm afraid this is quite a high-level question and I haven't included any specific code examples as I'm not sure where the problem actually lies.
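For what it's worth, the thread-specific factory idea from the linked articles boils down to something like this hypothetical Java sketch (made-up names, Linq2Sql replaced by a fake context): each thread lazily gets its own context, which is disposed and cleared at end-of-request so pooled threads start fresh.

```java
// Hypothetical sketch of a thread-specific context factory, in the spirit
// of the linked articles: one "DataContext" per thread/request, created
// lazily and removed when the request ends.
public class ContextFactory {
    // Stand-in for a real data context with a connection behind it.
    public static class FakeDataContext implements AutoCloseable {
        public boolean open = true;
        @Override public void close() { open = false; }
    }

    // Each thread lazily gets its own context on first access.
    private static final ThreadLocal<FakeDataContext> CURRENT =
            ThreadLocal.withInitial(FakeDataContext::new);

    public static FakeDataContext current() { return CURRENT.get(); }

    // Call at end-of-request: dispose and clear, so the next request
    // served by this pooled thread gets a fresh context.
    public static void endRequest() {
        CURRENT.get().close();
        CURRENT.remove();
    }

    public static void main(String[] args) {
        FakeDataContext first = current();
        System.out.println(first == current()); // same context within the request
        endRequest();
        System.out.println(first.open);         // old context was disposed
        System.out.println(current() != first); // next access gets a fresh one
    }
}
```

Because two concurrent requests run on different threads, they can never share a context this way, which removes exactly the "two threads on one DataContext" reuse suspected above; the end-of-request cleanup step is essential, though, since ASP.NET reuses pooled threads across requests.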
