WAS 8.0.0.2: JNDI lookup on persistence-context-ref always returns a new EntityManager instance - jndi

Hi WAS developers,
I have a problem: a JNDI lookup on a declared persistence-context-ref always returns a new EntityManager instance. I would expect the container to provide the same EntityManager instance within one JTA transaction. Multiple EntityManagers within one transaction cause locking trouble! Furthermore, JPA usage is not optimal, as entities may be loaded several times (once per EntityManager) within one transaction.
I have to use persistence-context-ref together with JNDI lookups because I have some EJB 2.1 beans in place within an EJB 3.1 module. Furthermore, I want the EntityManager to be container-managed.
To reproduce, just declare a persistence-context-ref on an EJB 2.1 SessionBean:
<persistence-context-ref>
    <persistence-context-ref-name>persistence/MyPersistence</persistence-context-ref-name>
    <persistence-unit-name>MyPersistence</persistence-unit-name>
</persistence-context-ref>
Now perform two consecutive JNDI lookups within an open JTA transaction:
context.lookup("java:comp/env/persistence/MyPersistence")
You will see that two different EntityManager instances are returned.
Is this a defect in WAS?

The EntityManager returned from a persistence-context-ref lookup is actually a proxy to a per-transaction EntityManager (a debugger or a print statement will show it is an instance of JPATxEntityManager), so even though each lookup returns a unique proxy object, they all interact with the same underlying EntityManager.
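A minimal sketch of how to see this for yourself inside a container-managed JTA transaction (MyEntity is a placeholder entity; the proxy class name is WAS-internal and may differ between fix packs):

import javax.naming.InitialContext;
import javax.persistence.EntityManager;

// Inside a business method of the EJB 2.1 SessionBean, with a JTA transaction active:
InitialContext context = new InitialContext();
EntityManager em1 = (EntityManager) context.lookup("java:comp/env/persistence/MyPersistence");
EntityManager em2 = (EntityManager) context.lookup("java:comp/env/persistence/MyPersistence");

System.out.println(em1 == em2);       // false - every lookup hands out a new proxy
System.out.println(em1.getClass());   // WAS-internal proxy, e.g. ...JPATxEntityManager

// Both proxies delegate to the same per-transaction persistence context:
MyEntity loaded = em1.find(MyEntity.class, 1L);
System.out.println(em2.contains(loaded));   // true - same underlying EntityManager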

Related

Why is NHibernate not reflecting in-memory state when updates are made with a stored procedure?

I have a process whereby I have an NHibernate session which I use to run a query against the database. I then iterate through the collection of results, and for each iteration, using the same NHibernate session, I call a SQL Stored Procedure (using CreateSQLQuery() & ExecuteUpdate()), which ends up performing an update on a field for that entity.
When it has finished iterating over the list (and calling the SP x times), if I check the database directly in SSMS, I can see that the UPDATE for each row has been applied.
However, in my code, if I then immediately run the same initial query again, to retrieve that list of entities, it does not reflect the updates that the SP made for each row - the value is still NULL.
I haven't specified any cache behaviour in my application's NHibernate configuration, and I have experimented with different SetCacheMode() values when calling the query, but nothing seems to make any difference: the values that I can see have been updated directly in the DB are not brought back as updated when I re-query the database (using Session.QueryOver()) with that same session.
By calling CreateSQLQuery (to update the database; single row or multiple rows does not matter), you are actually performing a DML-style operation, which does not update the in-memory state.
Any call to CreateSQLQuery or CreateQuery bypasses change tracking; these calls are considered outside the scope of the Unit of Work.
They affect the underlying database directly, ignoring any in-memory state.
14.3. DML-style operations
As already discussed, automatic and transparent object/relational mapping is concerned with the management of object state. This implies that the object state is available in memory, hence manipulating (using the SQL Data Manipulation Language (DML) statements: INSERT, UPDATE, DELETE) data directly in the database will not affect in-memory state. However, NHibernate provides methods for bulk SQL-style DML statement execution which are performed through the Hibernate Query Language (HQL). A Linq implementation is available too.
They (may) work on bulk data and are necessary in some scenarios for performance reasons. With these, tracking does not work, so yes, the in-memory state becomes invalid. You have to use them carefully.
if I then immediately run the same initial query again, to retrieve that list of entities, it does not reflect the updates that the SP made for each row - the value is still NULL.
This is due to the first-level (session) cache. It is always enabled by default and cannot be disabled for an ISession.
When you first load the objects, it is a database hit. You get the objects from the database, loop through them, execute commands that are outside the Unit of Work (as explained above), and then run the same initial query again to load the same objects under the same ISession instance. The second call does not hit the database at all.
It just returns the instances from memory. As your in-memory instances have not been updated, you always get the original instances.
To get the updated instances, close the first session and reload the instances with a new session.
For more details, please refer to: How does Hibernate Query Cache work
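A rough sketch of the suggested fix, assuming a sessionFactory field, a MyEntity mapped class, and a stored procedure called UpdateFlagSp (all placeholder names, not the original code):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var items = session.QueryOver<MyEntity>().List();

    foreach (var item in items)
    {
        // DML-style call: updates the database row but NOT the tracked in-memory entity.
        session.CreateSQLQuery("EXEC UpdateFlagSp :id")
               .SetParameter("id", item.Id)
               .ExecuteUpdate();
    }

    tx.Commit();
}

// Re-query with a NEW session so the rows are actually re-read from the database:
using (var session = sessionFactory.OpenSession())
{
    var refreshed = session.QueryOver<MyEntity>().List();
}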

Grails using SessionFactory vs. dataSource when creating Sql Object

In Grails I can create an Sql object in two ways:
def sql = new Sql(sessionFactory.currentSession.connection())
def sql = new Sql(dataSource)
I have read this thread here on Stackoverflow: Getting the SessionFactory for a particular Datasource in Grails
... and one of the answers was that dataSource "...gobbles up excessive connections better to use sessionFactory.currentSession.connection()"
Is this recommendation correct and what is the difference between those two?
When inspecting the created objects I could see that they are almost the same; just two properties differ:
dataSource and useConnection.
In the case of dataSource it was dataSource=TransactionAwareDataSourceProxy and useConnection=null, while for sessionFactory it is dataSource=null and useConnection=$Proxy36.
Why does this make a difference, with the consequence of "gobbling up excessive connections"?
The comment about "gobbling up excessive connections" is based on a few assumptions, which may or may not be true in your case.
The assumption is that Hibernate has already created, or will create during the request, a connection to the database because of the open-session-in-view pattern used by Grails and GORM. In that case you would be using one connection for Hibernate plus n additional connections for your other Sql objects.
If you use a mix of GORM and SQL connections it's safer to get the connection from the sessionFactory.
I seem to recall that older versions of Grails used to create the connection even if no GORM methods were executed during the request. I'm not entirely sure that's still the case with more recent versions (2.x+).
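For illustration, a minimal sketch of reusing Hibernate's connection inside a Grails service (ReportService and the query are made up; sessionFactory is the standard injected bean):

import groovy.sql.Sql

class ReportService {
    def sessionFactory   // injected by Spring

    def countBooks() {
        // Reuses the connection the Hibernate session already holds for this request,
        // instead of borrowing a second connection from the pool via dataSource.
        def sql = new Sql(sessionFactory.currentSession.connection())
        // Do not close this connection yourself - its lifecycle belongs to the Hibernate session.
        sql.firstRow('select count(*) as total from book').total
    }
}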

How to enlist XAResource with existing Transaction?

My use case is:
I have an existing JTA TransactionManager and a Transaction in-flight. I'd like to enlist Neo4j as an XAResource in this Transaction such that it may prepare/commit in proper 2PC.
I'm not seeing a public XAResource implementation in Neo4j; everything seems to be routed through the NioNeoDbPersistenceSource > NeoStoreXaDataSource > NeoStoreXaConnection.NeoStoreXaResource.
Is there a preferred way to enlist Neo4j in JTA Transactions outside those provided by its own TransactionManager? All the test cases I'm finding enlist mock "FakeXAResource"[1]
Appreciated!
S,
ALR
[1] e.g. UseJOTMAsTxManagerIT
OK, I have a solution which I believe is the best that Neo4j can handle, though I'm not thrilled with it. :)
https://gist.github.com/ALRubinger/b584065d0e7da251469a
The idea is this:
1) Implement Neo4j's AbstractTransactionManager
This clunky class is a composite of the JTA TransactionManager, Neo4j's Lifecycle, and some other methods; it's not completely clear to me what some of these (e.g. "getEventIdentifier()" or "doRecovery()") are supposed to do, and the contract feels over-specified. I'm not certain why we'd want lifecycle methods in here for the case where Neo4j is not the authoritative owner of the TransactionManager.
2) Implement Neo4j's TransactionManagerProvider
This will let you create a new instance of your AbstractTransactionManager implementation, but it's bound by the JDK Service SPI, so you must supply a no-arg constructor and find some other intelligent/hacky way of passing contextual information in.
3) Create a META-INF/services/org.neo4j.kernel.impl.transaction.TransactionManagerProvider file, with contents of the FQN of your TransactionManagerProvider impl from step 2)
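For example (the class name is a placeholder for your own implementation from step 2):

# META-INF/services/org.neo4j.kernel.impl.transaction.TransactionManagerProvider
com.example.tx.MyTransactionManagerProvider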
4) When you create a new GraphDatabaseService, pass in config like:
final GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory()
        .newEmbeddedDatabaseBuilder(FILE_NAME_STORAGE)
        .setConfig(GraphDatabaseSettings.tx_manager_impl.name(), "NAME_OF_YOUR_TXM_PROVIDER")
        .newGraphDatabase();
Then you access the TransactionManager using a deprecated API (GraphDatabaseAPI):
// Get at Neo4j's view of the TransactionManager
final TransactionManager tm = ((GraphDatabaseAPI) graphDatabaseService).getDependencyResolver().resolveDependency(TransactionManager.class);
tm.begin();
final Transaction tx = tm.getTransaction();
The real problem I have with this approach is that we have to use the TransactionManager implementation from Neo4j, which is wrapping our real TM. What I want to do is use my TM and enlist Neo4j as an XAResource.
So I still haven't found a way to do that, and judging from the Neo4j test suites I don't think it's possible at the moment with any of their supplied XAResource support.
Absolutely willing and hoping to be corrected! :)
But failing the points I mention above, the attached gist works and shows Neo4j using an external TransactionManager (Narayana, from us at JBoss) as the backing implementation.
S,
ALR

Error when adding to many-to-many table in EF6

I have recently changed from ObjectContext to DbContext by upgrading to EF6 (Entity Framework 6).
Most things work, but saving and updating don't. Here is an example:
public void AssignToAdmin(Product product, Admin admin)
{
    var pcsps = from s in context.ProductCreationSections
                join pcsp in context.ProductCreationSectionAndProducts on s.ProductCreationSecitionID equals pcsp.ProductCreationSectionID
                where pcsp.ProductID == product.ProductID && s.IsManagerAssigned
                select pcsp;

    foreach (var pcsp in pcsps.Include("AssignedAdmins"))
    {
        pcsp.AssignedAdmins.Add(admin);
    }
}
Trying to execute the line pcsp.AssignedAdmins.Add(admin), I get the error:
Error: The relationship between the two objects cannot be defined
because they are attached to different ObjectContext objects.
There is one context for the class and it comes from Dependency Injection (the class is a Service in an MVC app).
I've tried removing/attaching and so on, but this doesn't fix it - it just gives different error messages. It's not even obvious which entity is using another context.
Any ideas where this other context that the error message refers to is coming from?
See The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects
Where has admin come from?
Getting the admin object from the same context as the pcsp object should help.
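For illustration, something along these lines (the Admins DbSet and AdminID key are assumptions about the model, not the original code):

// Re-fetch the admin through the same context that is tracking pcsp:
var trackedAdmin = context.Admins.Find(admin.AdminID);
pcsp.AssignedAdmins.Add(trackedAdmin);
context.SaveChanges();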
Sorted it, but it was quite a major refactor.
The issue was that each service received its own instance of 'context', so when two entities from two services were expected to work together, they couldn't, as each belonged to a different context.
One solution would have been to make the class that created the context a 'Singleton' so that it always returned the same instance, but that would have been very BAD, as every page would then use the same context.
Each service got its own instance of 'context' through Dependency Injection.
I changed the project so only the controllers got an instance of the context through DI. Then, in the controller constructor, this context was passed to the services that the controller had received through dependency injection.
This way, each request only ever uses a single instance of the context, but that instance is still short-lived, as it is supposed to be: each request has its own context and doesn't share one.
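A rough sketch of that refactor (all type names are illustrative, not the original project's; it assumes each service exposes a Context property it uses internally):

using System.Web.Mvc;

public class ProductController : Controller
{
    private readonly MyDbContext context;
    private readonly ProductService productService;
    private readonly AdminService adminService;

    // Only the controller receives the DbContext from the DI container...
    public ProductController(MyDbContext context, ProductService productService, AdminService adminService)
    {
        this.context = context;
        this.productService = productService;
        this.adminService = adminService;

        // ...and hands that single request-scoped instance to each injected service,
        // so every entity loaded during the request is tracked by the same context.
        this.productService.Context = context;
        this.adminService.Context = context;
    }

    public ActionResult Assign(int productId, int adminId)
    {
        var product = productService.GetProduct(productId);
        var admin = adminService.GetAdmin(adminId);
        productService.AssignToAdmin(product, admin);

        context.SaveChanges();   // explicit save is now required after each set of changes
        return RedirectToAction("Index");
    }
}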
Not sure this is the way it is supposed to work, but given the web app I was working with, this seemed to be the best solution.
I also had to go through and add a 'context.SaveChanges();' statement everywhere a change to the database was made.
Not sure why this is necessary when the old version of EF did it automatically, but it now works.
Thanks for the advice Derrick

Difference between findAll, getAll and list in Grails

With Grails there are several ways to do the same thing.
Find all instances of the domain class:
Book.findAll()
Book.getAll()
Book.list()
Retrieve an instance of the domain class for a specified id:
Book.findById(1)
Book.get(1)
When do you use each one? Are there significant differences in performance?
getAll is an enhanced version of get that takes multiple ids and returns a List of instances. The list size will be the same as the number of provided ids; any misses will result in a null at that slot. See http://grails.org/doc/latest/ref/Domain%20Classes/getAll.html
findAll lets you use HQL queries and supports pagination, but since HQL queries aren't limited to instances of the calling class I use executeQuery instead. See http://grails.org/doc/latest/ref/Domain%20Classes/findAll.html
list finds all instances and supports pagination. See http://grails.org/doc/latest/ref/Domain%20Classes/list.html
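For a quick illustration of how these are typically called (the Book property names below are placeholders; the pagination arguments are the standard GORM ones):

// All of these return every Book instance:
def all1 = Book.findAll()
def all2 = Book.getAll()
def all3 = Book.list()

// list() supports pagination and sorting:
def page = Book.list(max: 10, offset: 20, sort: 'title', order: 'asc')

// getAll() with explicit ids keeps one slot per id (null where no row exists):
def some = Book.getAll(1, 2, 7)

// findAll() with an HQL query plus pagination parameters:
def cheap = Book.findAll("from Book b where b.price < :maxPrice", [maxPrice: 50], [max: 10])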
get retrieves a single instance by id. It uses the instance cache, so multiple calls within the same Hibernate session will result in at most one database call (e.g. if the instance is in the 2nd-level cache and you've enabled it).
findById is a dynamic finder, like findByName, findByFoo, etc. As such it does not use the instance cache, but can be cached if you have query caching enabled (typically not a good idea). get should be preferred since its caching is a lot smarter; cached query results (even for a single instance like this) are pessimistically cleared more often than you would expect, but the instance cache doesn't need to be so pessimistic.
The one use case I would have for findById is as a security-related check, combined with another property. For example instead of retrieving a CreditCard instance using CreditCard.get(cardId), I'd find the currently logged-in user and use CreditCard.findByIdAndUser(cardId, user). This assumes that CreditCard has a User user property. That way both properties have to match, and this would block a hacker from accessing the card instance since the card id might match, but the user wouldn't.
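Sketched out in a controller action, that check might look like this (assumes a CreditCard domain class with a user property and some means of resolving the logged-in user, e.g. a security service):

def user = springSecurityService.currentUser              // or however the current user is resolved
def card = CreditCard.findByIdAndUser(params.id as Long, user)
if (!card) {
    // Either the id does not exist or the card belongs to a different user.
    response.sendError(404)
    return
}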
Another difference between Domain.findById(id) and Domain.get(id) is that if you're using a Hibernate filter, you need to use Domain.findById(id); Domain.get(id) bypasses the filter.
AFAIK, these are all identical
Book.findAll()
Book.getAll()
Book.list()
These will return the same results
Book.findById(1)
Book.get(1)
but get(id) will use the cache (if enabled), so should be preferred to findById(1)
