JSF 2 Handling Data

I'm developing an app with JSF 2 on Tomcat. Still pretty fresh to JSF, I have a question which I guess is pretty simple to answer for someone with fairly good experience developing web apps, and specifically in JSF 2. It's about the way one should store users' data during their interaction with the application.

As it stands right now, I am saving a lot of data in a session scoped managed bean (like a collection of entities pulled earlier from the database, which are themselves linked to other entities), and whenever a request is made the application serves any one of the objects stored in that collection. But I am worried that the session bean is becoming bloated, and I don't know how much memory it is safe for a single session bean to consume. What I don't know is whether, when the app goes into production and a lot of users log in, the server is going to be able to handle it. So I guess my question is as follows.
Is there any rule to go by for storing, handling and serving large amounts of user data that come from the database:
Is it OK to do it my way (that is, store a lot of stuff in a session scoped bean) so that with each request the app doesn't need to query and retrieve the data from the database? And if that's the case, how do I best load the entities linked to an object stored in a list, so that they are not all loaded at once but only after the actual object is used to perform some operations, before sending the data to the user?
Or should the app keep session beans fairly light (no lists of pre-pulled entities, objects, etc.) and instead make a trip to the database every time a larger piece of data is required, retrieving it and serving it on the fly?
Or perhaps there's an entirely different method, preferred or recommended, to do this.
All suggestions and help are very much appreciated.

You should definitely not do the entity caching job in a JSF session scoped bean. You should delegate the entity caching job to the persistence layer and configure/fine-tune it over there. JPA and Hibernate, for example, have pretty good caching support. Here are some articles to read about it:
Javalobby - JPA Caching
EclipseLink Examples - JPA Caching
Hibernate Reference - Chapter 21 - Improving performance
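To give an idea of what "configuring it in the persistence layer" looks like, here is a minimal JPA 2.0 sketch (the entity and its fields are hypothetical); the provider-specific cache itself still needs to be set up as per the articles above:

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Hypothetical entity; @Cacheable opts it into the JPA second-level (shared) cache.
    // Also set <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode> in persistence.xml.
    @Entity
    @Cacheable(true)
    public class Product {

        @Id
        private Long id;

        private String name;

        // getters/setters omitted
    }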
A JSF session scoped bean should contain only session scoped data which is used in every request of the webapp throughout the entire browser session. For example, the logged-in user, the user preferences, the user language/locale, etcetera.
The (form) data should just be represented by a JSF request or view scoped bean. Just call the database on every fresh new request or view and do not worry about the cost of the database trips in your JSF backing bean. Let the persistence layer worry about it.
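As a minimal sketch of that request/view scoped approach (all names are hypothetical, and the service is assumed to wrap JPA; on plain Tomcat you instantiate it yourself, while in a full Java EE container you would inject it with @EJB):

    import java.io.Serializable;
    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.ViewScoped;

    @ManagedBean
    @ViewScoped
    public class ItemBacking implements Serializable {

        // Hypothetical service wrapping JPA; caching is its problem, not the bean's.
        private ItemService service = new ItemService();

        private List<Item> items;

        @PostConstruct
        public void init() {
            items = service.list(); // fresh fetch per view
        }

        public List<Item> getItems() {
            return items;
        }
    }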

Related

nHibernate w/ASP.NET Session Per Request And Active Work Item

I am building an ASP.NET application using nhibernate and I implemented the session per request architecture. Each request I am opening a session, using it, then closing it. I am using one large object across several views and I am storing the object in user session cache so it maintains state for them across several different pages. The users do not want to have to save their changes on each page, they want the ability to navigate between several pages making changes, then commit them to the DB. This works well except for when the users try to hit a page that triggers lazy loading on the proxy object (which fails due to the session per request design closing the nhibernate session in the previous request). I know that turning lazy loading off would fix it; however, that is not an option due to the performance issues it would cause in other areas. I have tried changing the session per request design but have had no luck since I do not know when it is "safe" to close the nhibernate session.
Has anyone else done anything similar to this or have any advice?
Thanks in advance!
Keeping objects in the user session - a server-side session, a server resource - is not the best approach. Just imagine that your application accidentally becomes very, very successful... and there are many users... and therefore many sessions == many resources.
I would suggest (based on some experience) trying to re-think the architecture to avoid the session. The extreme would be to go to a single page application... but even some async communication during the "...navigation among several pages..." would be better.
All that would mean that:
we pass data to the client (the standard ASP.NET MVC way, with rendered views or some Web API JSON)
if needed, we send data back to the server (binding of forms or formatted JSON)
In these scenarios, the standard NHibernate session will work. Why? Because we "reload and reassign" objects with the standard NHibernate infrastructure. That would be the way I suggest to go...
But if you want to follow your way, then definitely check the merge functionality of NHibernate:
9.4.2. Updating detached objects
9.4.3. Reattaching detached objects
19.1.4. Initializing collections and proxies
Some quotes:
In an application with a separate business tier, the business logic must "prepare" all collections that will be needed by the web tier before returning. This means that the business tier should load all the data and return all the data already initialized to the presentation/web tier that is required for a particular use case. Usually, the application calls NHibernateUtil.Initialize() for each collection that will be needed in the web tier (this call must occur before the session is closed) or retrieves the collection eagerly using a NHibernate query with a FETCH clause or a FetchMode.Join in ICriteria. This is usually easier if you adopt the Command pattern instead of a Session Facade.
You may also attach a previously loaded object to a new ISession with Merge() or Lock() before accessing uninitialized collections (or other proxies). No, NHibernate does not, and certainly should not do this automatically, since it would introduce ad hoc transaction semantics!
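NHibernate's API mirrors Hibernate's Java API here, so purely as an illustration (Java shown, with a hypothetical Order entity; in C# the corresponding calls are ISession.Merge()/Lock() and NHibernateUtil.Initialize()), the reattach-then-initialize pattern looks roughly like this:

    import org.hibernate.Hibernate;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public class ReattachExample {

        // Reattach a detached Order before touching its lazy collections.
        public Order reattach(SessionFactory sessionFactory, Order detachedOrder) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                // merge() copies the detached state onto a managed instance
                Order managed = (Order) session.merge(detachedOrder);
                // initialize the lazy collection while the session is still open
                Hibernate.initialize(managed.getLines());
                tx.commit();
                return managed;
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }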

EJB3 / JSF2: Design of JSF2 app with ConversationalScope and Stateful EJB

My use case is as follows:
Managing orders with order lines, a customer and payment details.
The app consists of an order list view, from which an order detail view can be opened for editing an existing order or creating a new order. The order detail view uses a view param (an existing order id, or nothing to indicate that a new order should be created).
When the order detail view is opened, an OrderControllerBean starts a conversation (ConversationalScope) and, depending on the availability of the order id, loads an existing order entity or creates a new one. This bean is also a stateful session bean, meant to be used as a facade. The bean contains methods for handling the order lines, the customer and the payment details, as well as for saving and deleting an order. These methods use injected EJBs, designed as stateless session beans acting as a kind of DAO, to handle the JPA entities order, order line, customer and payment detail.
From the order detail view, with its customer info, payment info and order line list, the user can navigate to the order line detail view for adding/editing order lines, and to the customer and payment detail views in a similar manner. Those detail views all use the same OrderControllerBean. On the customer, order line and payment detail views there are Ok and Cancel buttons which are not transactional.
On the order detail view there is a Save and Cancel button which should persist all modifications which are done during the conversation.
The question I have now is: is this design suitable and ok?
I am not sure about the following issues:
What happens if the user never uses Save or Cancel?
Does everything stay around until the conversation or the session times out?
What does this mean from the transaction perspective?
What does this mean for the managed entities?
What happens if the user leaves his workplace and comes back later, continuing work on the conversation? If the conversation has timed out, how can I gracefully handle this issue?
Stateful beans are a pain and a source of problems in my opinion.
It's better if you handle timeouts at the HTTP session level, rather than giving this responsibility to the application server (especially since the HTTP session timeout is still relevant; you would just be adding another timeout).
You can replace the persistent state provided by the stateful bean with some kind of object caching, or if you prefer you can add a session id to the database and keep track of your objects' states there (for example, special tables that hold temporary objects until they are saved or discarded).
All in all, keep things apart: timeouts and temporary objects on the web server side, and EJBs for persistence (JPA) and as a facade (stateless beans).
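A rough sketch of that facade idea (names hypothetical): a stateless bean fronting JPA, with each call running in its own container-managed transaction.

    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class OrderFacade {

        @PersistenceContext
        private EntityManager em;

        // merge() handles both new and detached instances coming from the web tier
        public Order save(Order order) {
            return em.merge(order);
        }

        public Order find(Long id) {
            return em.find(Order.class, id);
        }
    }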
Why do you need to evaluate the either/or situation in creating orders? There should simply be a button that says "New Order", which launches a popup form, and another, possibly on a datatable row, that says "View Order Details".
The design is fine. Just a few tweaks.
You'll want to create and maintain a single session scoped object (call it Visit) in which you store all session related material. I'd advise you to stick with the JSF session scoped bean, which is dependent on the http session and can be effectively managed by the container. JSF session scoped beans are stored as simple objects in the http session and so can be easily manipulated outside the context of JSF. CDI session scoped beans, however, are trickier to handle outside of CDI.
I'd also advise you to break up your order creation process into a multi-step flow using dynamically loaded page fragments (<ui:include/>) or the fine PrimeFaces wizard component. With a multi-step creation, all you need to do is gather data along the steps and commit a transaction only when you have all the info you need from all the steps in a DTO. Keep the wizard DTO in the single session object so the container can clean up in case of a timeout. If the user never saves or cancels, or walks away from his desk, the session will die a natural death, and he can come back and continue his transaction if he makes it in time.

Not sure what carlos has against stateful session beans, but in my experience session beans are a nice way of exposing business processes, in that they can expose their functionality in other ways too, like web services and message targets (JEE5). Above all, it's very good design to keep as much business processing out of the managed beans as possible and in durable constructs like EJBs, Spring beans, etc.
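A minimal sketch of the Visit idea (all names hypothetical, reusing the kind of stateless facade sketched above): one session scoped holder for the wizard DTO, committed in a single step at the end.

    import java.io.Serializable;
    import javax.ejb.EJB;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.SessionScoped;

    @ManagedBean
    @SessionScoped
    public class Visit implements Serializable {

        @EJB
        private OrderFacade facade; // hypothetical stateless facade

        // Gathers input across the wizard steps; cleaned up with the http session on timeout
        private OrderDto orderDraft = new OrderDto(); // hypothetical DTO

        public OrderDto getOrderDraft() {
            return orderDraft;
        }

        // Invoked from the last wizard step: one transaction, one commit
        public String save() {
            facade.save(orderDraft.toEntity()); // hypothetical mapping method
            orderDraft = new OrderDto();
            return "orderList?faces-redirect=true";
        }
    }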

Integrating ice/ace:dataTable with JPA and request-scoped beans

I'm wondering what is the right way to deal with dataTables that take input in a Hibernate/JPA world. As far as I can tell, one of the following three choices is causing the whole house of cards to fall apart, but I don't know which one is wrong.
Semi-automatic transaction and EntityManager handling via a custom JSF PhaseListener that begins and commits transactions around every request
Putting editing components inside a dataTable
Using request-scoped managed beans that fetch their data from a request-scoped EntityManager (with some help from PrettyFaces to set IDs on the request scoped beans from their URLs)
Backing a dataTable with a request-scoped bean instead of a view- or session-scoped bean.
I see an ICEfaces dataTable demo using JPA, but it is both manually managing the transactions and not displaying editing components by default. You click on the row, which causes an object to be nominated for editability, and then when you hit "save" it manually reconnects the object to the new EntityManager before manually triggering a save. I see the click-to-edit function here as giving us a way to ensure that the right object gets reattached to the current session, and I don't know how one would live without something similar.
The impression I'm getting about the new ICEfaces 3.0 ace:dataTable (née PrimeFaces 2.0 dataTable) is that it is intended to be used with a view- or session-scoped bean, but I don't see how one could get around StaleObjectStateExceptions and/or LazyInitializationExceptions if one has model objects coming out of the DAO in request A with EntityManager A and then being modified or paged in by request B with EntityManager B.
I suppose it might work under Java EE through some kind of deep fu, but I don't have the luxury of upgrading us from Tomcat 6 to anything fancier right now (though it is my intent in the long run). We're also not about to start using Spring or Seam or whatever the other cool stuff is. ICEfaces is weird enough for us, probably too weird, honestly.
So to sum up, which of these is the wrong choice? The request-scoped entity manager, the request-scoped dataTable or using editing components inside a dataTable? Or is something else really at fault here?
If you ask me, the prime fault seems to be sticking to an almost bare Tomcat when your requirements seem to scream for something a little fancier. The mantra is normally that you use Tomcat when you don't need "all that other stuff", so when you do need it, why keep using a bare Tomcat?
That said, the pattern really isn't that difficult.
Have a view scoped backing bean
Obtain the initial data in a @PostConstruct method (when there are no parameters like IDs), or in a PreRenderViewEvent method in combination with view parameters
Use a separate Service class that uses an entity manager to obtain and save the data
Make the entity manager "transaction scoped"
Without EJB/CDI/Spring:
Obtain a new entity manager from an entity manager factory for every operation.
Start a (resource local) transaction, do the operation, commit transaction, close entity manager.
Return the list of entities directly from your backing bean, bind the edit mode input fields of the table to the corresponding properties of the entity.
When updating a single row, pass the corresponding entity to the update method of your service. Apart from the overhead of getting an entity manager, starting the transaction etc, this basically only calls merge() on the entity manager.
Realize that outside the service you're working with detached entities all the time. There is thus no risk of any LazyInitializationException. The backing beans need to be in view scope so the correct (detached!) entity is updated by JSF, which your own code then passes to the service, which merges it into the persistence context.
The flow for persisting is thus:
    View state            View scope               Transaction scoped PC
    Facelet/components    Backing Bean             Service
    Strings         -->   Detached entities  -->   Attached entities
(the flow for obtaining data is exactly the reverse)
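A bare-bones sketch of the "without EJB/CDI/Spring" variant described above (the persistence unit name and entity are hypothetical): the backing bean holds detached entities and calls this service, which opens an entity manager and a resource-local transaction per operation.

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class ItemService {

        // "app" is a hypothetical resource-local persistence unit
        private static final EntityManagerFactory EMF =
                Persistence.createEntityManagerFactory("app");

        public List<Item> list() {
            EntityManager em = EMF.createEntityManager();
            try {
                return em.createQuery("SELECT i FROM Item i", Item.class).getResultList();
            } finally {
                em.close(); // the returned entities are detached from here on
            }
        }

        public void update(Item item) {
            EntityManager em = EMF.createEntityManager();
            try {
                em.getTransaction().begin();
                em.merge(item); // reattach the detached entity and flush the changes
                em.getTransaction().commit();
            } catch (RuntimeException e) {
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
                throw e;
            } finally {
                em.close();
            }
        }
    }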
Creating the Service this way is a little tedious and a kind of masochistic exercise, though. For an example app with just the two methods (get and update) discussed above it wouldn't be so bad, but for any sizable app this will quickly get out of hand.
If you are already adding JSF and JPA to Tomcat, just do yourself a favor and use something like TomEE. This is barely bigger than Tomcat (25MB vs 7MB) and contains all the stuff you're supposedly trying to avoid but in reality need anyway.
In case you absolutely can't upgrade your Tomcat installation (e.g. the product owner or manager thinks he owns the server instead of the developers), you might want to invest in learning about CDI. This can easily be added to your war (just one extra jar) and lets you abstract away lots of the tedious code. One thing that you could also really use is a JTA provider. This too can be added separately to your war, but the more of this stuff you add, the better off you'll be just using TomEE (or alternatives like GlassFish, Resin, JBoss, etc.).
Also see this article, which covers various parts of your requirements: Communication in JSF 2.0

Relying on nhibernate's second level cache vs pushing objects into asp.net session

I have some big entities which are frequently accessed in the same session. For example, in my application there is a reporting page which consist of dynamically generated chart images. For each chart image on this page, the client makes requests to corresponding controller and the controller generates images using some entities.
I can either use ASP.NET's session dictionary for "caching" those entities, or rely on NHibernate's second-level cache support, using cached queries for example.
What is your opinion?
By the way, I will use shared hosting; is NHibernate's second-level cache hosting friendly?
Thanks.
I think you should use NHibernate's cache. If the user makes a distinct request to get each entity one by one, then you will probably end up using different NHibernate ISession instances to get them (because of the session-per-request strategy).
Also, when using the NHibernate cache you won't have trouble with concurrency issues - it will handle them for you.
Beware of caching an entity (from the session you loaded it with) in a static variable that is then accessed by another session (e.g., pulled from the cache system you made).
Entity instances are tied to a Session, remember, so you shouldn't mix and match instances across Session boundaries.
I've solved this before by creating a light version of the class (that is not NH Session aware) and caching that basic class instead.
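The "light version" approach boils down to copying what you need into a plain, session-unaware object before caching it. A rough sketch (Java shown for consistency with the rest of this page; all names hypothetical):

    import java.io.Serializable;

    // Plain snapshot that is safe to cache and share: no lazy proxies escape the session.
    public class ChartDataLight implements Serializable {

        private final long id;
        private final String title;
        private final double[] values;

        public ChartDataLight(ChartData entity) { // ChartData is a hypothetical entity
            this.id = entity.getId();
            this.title = entity.getTitle();
            this.values = entity.getValues().clone(); // copy, don't hold the entity's array
        }

        public long getId() { return id; }
        public String getTitle() { return title; }
        public double[] getValues() { return values; }
    }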
Alternatively, use the 2nd level cache, which doesn't have this problem.

ASP.NET MVC saving Entity session

I have been working with Entity Framework and ASP.NET MVC for a while. I have stored the entity object in HttpContext.Current.Session in order to use the same session at all times. Now I have encountered some problems, and I am wondering if this may have been a bad idea and, if so, how I should do it otherwise.
The problem I have now is that the entity object caches data and that one user cannot see changes that the other user has done.
The session is basically a hash table-like structure that is specific to a user. If you want to store data that can be seen by all users of the system, you either want to use the Application scope or caching.
This article from MS covers the different options for state management, including session and application:
http://msdn.microsoft.com/en-us/library/75x4ha6s.aspx
Caching is slightly different in that it allows you to do things like set expiration. If you don't need this kind of functionality, I would recommend sticking with application state. Article on caching from MS:
http://msdn.microsoft.com/en-us/library/6hbbsfk6(VS.71).aspx
The problem with storing entities in memory between requests/sessions or whatever is that you have to be very careful if you have a new ObjectContext for each request/session, because an entity can only be attached to one ObjectContext at a time, and it is easy to forget to detach (between requests in the same session) or to correctly share an object (between concurrent requests in different sessions).
Check out this Tip for clues on how to cache data between requests / users / sessions etc.
Hope this helps
Alex
By the book, you would want your object context to exist for the shortest time possible.
So you create a context, grab some data, return your view to the client,
and dispose of everything.
Start caching when your DB can't handle the load.
