I'm using Ninject in an ASP.NET MVC project for database binding. In "private static IKernel CreateKernel()", I'm binding a database object like below:
kernel.Bind<IDbSession>().ToProvider<IDbSession>(new NhDbSessionProvider()).InRequestScope();
This works almost exactly as intended, but some pages on the site make AJAX calls to controller methods, and it seems like every one of these calls opens a SQL connection and doesn't dispose of it when the controller returns. Eventually this causes NumberOfConnections on the SQL database to exceed the maximum, and the site throws errors that connections aren't available.
I'm pretty new to Ninject and am taking over an existing project, so I'm trying to make this work without major changes. Any ideas what I can do to get these objects to dispose? From what I'm reading, it seems like this should already be happening automatically, but maybe I just don't understand.
If I need to share more code let me know.
Disposable instances are Disposed at the end of request processing, according to the documentation.
However, the documentation for InRequestScope has a section on Ensuring Deterministic Dispose calls in a Request Processing cycle.
Specifically:
Use the Ninject.Web.Common package
Manually register OnePerRequestModule in web.config (e.g. if you're not using a Ninject.Web.MVC* NuGet package)
It is perhaps the case that your instances are being/will be Disposed, but just not at the time you're expecting them to be.
It seems someone commented out this line for whatever reason (it wasn't me). Adding it back resolves the problem.
DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
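For context, this registration normally lives in the NinjectWebCommon bootstrapper that the Ninject.Web.Common NuGet package generates. A rough sketch of the shape of that file, assuming the standard template (the IDbSession binding is the one from the question; everything else is the usual boilerplate):

// App_Start/NinjectWebCommon.cs - a sketch of the standard bootstrapper shape.
using Microsoft.Web.Infrastructure.DynamicModuleHelper;
using Ninject;
using Ninject.Web.Common;

public static class NinjectWebCommon
{
    private static readonly Bootstrapper bootstrapper = new Bootstrapper();

    public static void Start()
    {
        // This is the module that disposes InRequestScope'd instances
        // (like IDbSession above) when each request ends.
        DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
        DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
        bootstrapper.Initialize(CreateKernel);
    }

    public static void Stop()
    {
        bootstrapper.ShutDown();
    }

    private static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IDbSession>().ToProvider<IDbSession>(new NhDbSessionProvider()).InRequestScope();
        return kernel;
    }
}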
I have created a simple service that takes 4 parameters. When I add EntityManagerInterface as a parameter, I get a maximum nesting level error. The same happens even if I pass the arguments manually (without autowiring).
Interestingly enough, when I remove the EntityManagerInterface parameter it all works fine. The problem is, I need the EntityManager in the service.
Any ideas where to look?
For anyone running into a similar situation in the future, the problem was the following:
The service that was asking for the EntityManager was being injected into a Doctrine lifecycle class. Apparently this causes an infinite recursion issue, as Doctrine is not fully initialised at that point and resolving the service tries to initialise it again.
Marking the service as lazy doesn't work, as it's required in the constructor. Is there some way around this that keeps the EntityManager dependency in the service and still lets it be used in the Doctrine lifecycle event class?
I am looking at the latest ASP.NET MVC 4 Internet Application template, which uses SimpleMembership. I see the following placed inside an action filter:
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
    // Ensure ASP.NET Simple Membership is initialized only once per app start
    LazyInitializer.EnsureInitialized(ref _initializer, ref _isInitialized, ref _initializerLock);
}
Can someone tell me why it's coded this way? Why not just put a call to the initialization code in global.asax which is run once every app start? Am I missing something?
I don't know why it is placed there. But if you want your Membership functions to work, you will need to call that initialization once. So if you call any Membership-related method anywhere other than the AccountController, make sure LazyInitializer.EnsureInitialized has been called first.
I believe the template used an attribute for database initialization so that the non-authenticated portions of the site would still work if the initialization failed.
For most practical purposes, it's best to just have this done in the Application_Start.
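If you do move it to Application_Start, a minimal sketch could look like the following. This assumes the template's default "DefaultConnection" string and UserProfile table/column names; adjust them to whatever your InitializeSimpleMembershipAttribute currently passes.

// Global.asax.cs - a sketch only, not the template's exact code.
using System.Web.Mvc;
using WebMatrix.WebData;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        // ... the template's usual filter/route/bundle registration ...

        // The same call the action filter's lazy initializer ends up making,
        // done once at startup instead.
        if (!WebSecurity.Initialized)
        {
            WebSecurity.InitializeDatabaseConnection(
                "DefaultConnection", "UserProfile", "UserId", "UserName",
                autoCreateTables: true);
        }
    }
}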
Putting the initialisation in an ActionFilter is a bit nicer for code reuse. You could put your controller in its own assembly (assuming you're using a DI container to find your controllers) and then reuse it within multiple projects. Since the initialisation is done by a filter on the controller, you don't need to register it in each project; it just gets picked up automatically.
Another reason is lazy initialisation - if your site is mostly anonymous, then you don't need to have this initialisation routine called. Granted it probably doesn't do much, but it means that an initial loading delay only occurs on a page which requires logging in, rather than your home page, for example.
I'm wondering what the right way is to deal with dataTables that take input in a Hibernate/JPA world. As far as I can tell, one of the following choices is causing the whole house of cards to fall apart, but I don't know which one is wrong.
Semi-automatic transaction and EntityManager handling via a custom JSF PhaseListener that begins and commits transactions around every request
Putting editing components inside a dataTable
Using request-scoped managed beans that fetch their data from a request-scoped EntityManager (with some help from PrettyFaces to set IDs on the request scoped beans from their URLs)
Backing a dataTable with a request-scoped bean instead of a view- or session-scoped bean.
I see an ICEfaces dataTable demo using JPA, but it both manages the transactions manually and doesn't display editing components by default. You click on the row, which causes an object to be nominated for editability, and then when you hit "save" it manually reconnects the object to the new EntityManager before manually triggering a save. I see the click-to-edit function here as giving us a way to ensure that the right object gets reattached to the current session, and I don't know how one would live without something similar.
The impression I'm getting about the new ICEfaces 3.0 ace:dataTable (née PrimeFaces 2.0 dataTable) is that it is intended to be used in a View- or Session-scoped bean, but I don't see how one could get around StaleObjectState and/or LazyInitializationExceptions if one has model objects coming out of the DAO in request A and EntityManager A and then being modified or paged in by request B with EntityManager B.
I suppose it might work under Java EE through some kind of deep fu, but I don't have the luxury of upgrading us from Tomcat 6 to anything fancier right now (though it is my intent in the long run). We're also not about to start using Spring or Seam or whatever the other cool stuff is. ICEfaces is weird enough for us, probably too weird honestly.
So to sum up, which of these is the wrong choice? The request-scoped entity manager, the request-scoped dataTable or using editing components inside a dataTable? Or is something else really at fault here?
If you ask me, the prime fault seems to be sticking to an almost bare Tomcat when your requirements seem to scream for something a little fancier. The mantra is normally that you use Tomcat when you don't need "all that other stuff", so when you do need it, why keep using a bare Tomcat?
That said, the pattern really isn't that difficult.
Have a view scoped backing bean
Obtain the initial data in a @PostConstruct method (when there are no parameters like IDs) or a PreRenderViewEvent method in combination with view parameters
Use a separate Service class that uses an entity manager to obtain and save the data
Make the entity manager "transaction scoped"
Without EJB/CDI/Spring:
Obtain a new entity manager from an entity manager factory for every operation.
Start a (resource local) transaction, do the operation, commit transaction, close entity manager.
Return the list of entities directly from your backing bean, bind the edit mode input fields of the table to the corresponding properties of the entity.
When updating a single row, pass the corresponding entity to the update method of your service. Apart from the overhead of getting an entity manager, starting the transaction etc, this basically only calls merge() on the entity manager.
Realize that outside the service you're working with detached entities all the time. There is thus no risk of any LazyInitializationExceptions. The backing beans need to be in view scope so the correct (detached!) entity is updated by JSF, which your own code then passes to the service, which merges it into the persistence context.
The flow for persisting is thus:
View state              View scope               Transaction scoped PC
Facelet/components      Backing Bean             Service
Strings            -->  Detached entities   -->  Attached entities
(the flow for obtaining data is exactly the reverse)
Creating the Service this way is a little tedious and a kind of masochistic exercise though. For an example app with just the two methods (get and update) discussed above it wouldn't be so bad, but for any sizable app this will quickly get out of hand.
If you are already adding JSF and JPA to Tomcat, just do yourself a favor and use something like TomEE. This is barely bigger than Tomcat (25MB vs 7MB) and contains all the stuff you're supposedly trying to avoid but in reality need anyway.
In case you absolutely can't upgrade your Tomcat installation (e.g. the product owner or manager thinks he owns the server instead of the developers), you might want to invest in learning about CDI. This can be easily added to your war (just one extra jar) and lets you abstract away lots of the tedious code. One thing that you could also really use is a JTA provider. This too can be added separately to your war, but the more of this stuff you add, the better off you'll be just using TomEE (or alternatives like GlassFish, Resin, JBoss, etc).
Also see this article, which covers various parts of your requirements: Communication in JSF 2.0
I'm looking for some smart ideas on how to quickly find all usage of session state within an existing asp.net (MVC) application.
The application in question was the subject of outsourced development, and has been running fine in production. But we recently realised that it's using InProc session state rather than (our preferred route) StateServer.
In a small scale test, we switched it over to StateServer, and all worked fine. However, when we deployed to production, we suddenly experienced a large number of errors. I'm not sure if these errors were caused by this change (they were actually complaining about database level problems), but removing the change allowed the application to function once again (this may have just been because it caused a recycle to occur).
So before I try switching it again, I'd like to perform a thorough audit of all objects being placed in the session (I'd previously taken a quick look at a couple of controllers, and they seemed fine). If I had full control over the code, this is the kind of place where I'd just comment out the class (to compile and find the various ways of reaching the session class), then comment out the accessors, hit compile, and visit each error. But I can't do that with built-in .NET Framework types.
So, any smart ideas on how to find each usage?
I decided to try using Reflector. I've analyzed the "Used By" for each of the following:
System.Web.HttpSessionStateBase.set_Item(String, Object) : Void
System.Web.SessionState.HttpSessionState.set_Item(String, Object) : Void
System.Web.SessionState.HttpSessionState.set_Item(Int32, Object) : Void
System.Web.HttpSessionStateBase.set_Item(Int32, Object) : Void
System.Web.HttpSessionStateBase.Add(String, Object) : Void
System.Web.SessionState.HttpSessionState.Add(String, Object) : Void
(and checked that we don't use TempData anywhere). Am I missing any other routes by which items can end up in the Session?
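(For reference, a hypothetical sketch of the kinds of call sites those members correspond to; this isn't our actual code, just the set of patterns the search needs to catch:)

// Hypothetical examples only - ways items can end up in the session.
using System.Web.Mvc;

public class ExampleController : Controller
{
    public ActionResult Demo()
    {
        Session["basket"] = 42;                           // HttpSessionStateBase.set_Item(String, Object)
        Session.Add("user", "alice");                     // HttpSessionStateBase.Add(String, Object)
        System.Web.HttpContext.Current.Session["x"] = 1;  // HttpSessionState.set_Item(String, Object)
        TempData["message"] = "saved";                    // backed by session via the default SessionStateTempDataProvider
        return View();
    }
}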
You can get the source for ASP.NET MVC to look for use of Session:
http://aspnet.codeplex.com/releases/view/58781 for MVC 3
http://aspnet.codeplex.com/releases/view/41742 for MVC 2
http://aspnet.codeplex.com/releases/view/24471 for MVC 1
I've actually found these quite useful to have lying around for when you need to find out why something is doing what it does.
MVC 2 won't run with Session switched off, so it may be using the session in ways not compatible with StateServer.
From the DB errors, it sounds like NHibernate may be doing something. You could get the source for that as well to have a look, but I'm sure its use of session will be documented.
Simon
I say do a solution-wide search for "Session". Besides this, if this is an ASP.NET MVC application, don't forget that TempData also uses Session.
My .NET MVC project has reached the stage of testing with multiple users and I am getting seemingly random errors (from any screen in the site) relating to Linq2Sql DataReader issues, such as:
'Invalid attempt to call FieldCount when reader is closed.' and
'ExecuteReader requires an open and available Connection. The connection's current state is closed.'
These errors also appear, less frequently, during single-user testing if a link is double-clicked, or when multiple browser tabs or different browsers are refreshed simultaneously.
I'm wondering if these issues are down to DataContext threading problems.
At the moment I am using the repository approach with separate repositories for each business process. Each repository class fires up a DataContext instance in its constructor and this is used by most methods in the repository.
However some methods are updating the DataLoadOptions to force eager loading of view data so these methods create their own instance of the DataContext.
Also, on some screens information from multiple business objects is displayed, so there may be 2 or 3 repositories involved in a single request. Consequently there could be many separate DataContext instances created per request.
I've tried to enforce an eager loading approach throughout using DataLoadOptions where necessary and applying ToList() on query results to make sure everything is loaded up front (not waiting until the view is rendered) - so each DataContext should only be open for a fairly short period.
As the errors appear to be related to multiple threads reusing the same DataContext(s), I'm thinking of implementing a single DataContext per request along the lines of Rick Strahl's Thread Specific DataContextFactory (http://www.west-wind.com/weblog/posts/246222.aspx) or a simpler “unit of work datastore” approach as in Steve Sanderson's example (http://blog.stevensanderson.com/2007/11/29/linq-to-sql-the-multi-tier-story/).
But then there's the issue of DataLoadOptions to resolve. I could create additional thread-specific DataContexts but that seems to be getting away from my goal of using a single DataContext per request. So I'm looking at either reusing the same DataContext but temporarily changing the LoadOptions for some methods as in Kevin Watkin's example (http://www.mrkwatkins.co.uk/Blog/2010/05/)
or scrapping the standard DataLoadOptions approach in favour of using preload stored procedures to pre-populate EntityRefs as discussed by Roger Jennings in Visual Studio magazine (http://visualstudiomagazine.com/Articles/2007/11/01/Optimize-LINQ-to-SQL-Performance.aspx?Page=3)
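To make the per-request option concrete, here is a minimal sketch of the HttpContext.Items-based "unit of work datastore" idea mentioned above (MyDataContext is a placeholder for the project's generated data context, not code from it):

// A sketch of a per-request DataContext store; MyDataContext is a placeholder name.
using System.Web;

public static class RequestDataContextStore
{
    private const string Key = "__RequestDataContext";

    // Returns the DataContext for the current request, creating it on first use.
    public static MyDataContext Current
    {
        get
        {
            var context = HttpContext.Current.Items[Key] as MyDataContext;
            if (context == null)
            {
                context = new MyDataContext();
                HttpContext.Current.Items[Key] = context;
            }
            return context;
        }
    }

    // Call from Application_EndRequest in Global.asax so the context
    // (and its underlying connection) is disposed when the request finishes.
    public static void DisposeCurrent()
    {
        var context = HttpContext.Current.Items[Key] as MyDataContext;
        if (context != null)
        {
            context.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}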
So my question is: has anyone experienced similar random DataReader issues, how did you resolve them, and am I likely to resolve the problem by implementing the solutions I've linked to?
As always time is tight - so I don't want to spend it implementing a solution if the problem is actually somewhere else. Any help would be greatly appreciated!
PS. I'm afraid this is quite a high-level question and I haven't included any specific code examples as I'm not sure where the problem actually lies.