In Grails I can create a Sql object in two ways:
def sql = new Sql(sessionFactory.currentSession.connection())
def sql = new Sql(dataSource)
I have read this thread here on Stack Overflow: Getting the SessionFactory for a particular Datasource in Grails
... and one of the answers was that dataSource "...gobbles up excessive connections better to use sessionFactory.currentSession.connection()"
Is this recommendation correct and what is the difference between those two?
When inspecting the created objects I could see that they are almost the same; only two properties differed:
dataSource and useConnection.
With dataSource it was dataSource=TransactionAwareDataSourceProxy and useConnection=null, while for sessionFactory it is dataSource=null and useConnection=$Proxy36.
Why does this difference have the consequence of "gobbling up excessive connections"?
The comment about "gobbling up excessive connections" is based on a few assumptions, which may or may not be true in your case.
The assumption is that Hibernate has already created, or will create, a connection to the database during the request because of the session-in-view pattern used by Grails and GORM. In that case you would be using one connection for Hibernate plus n additional connections for your other SQL work.
If you use a mix of GORM and SQL connections it's safer to get the connection from the sessionFactory.
I seem to recall that older versions of Grails used to create the connection even if no GORM methods were executed during the request. I'm not entirely sure that's still the case with more recent versions (2.x+).
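As a minimal sketch of the safer pattern, assuming a Grails service with sessionFactory injected (the service and query here are hypothetical):

```groovy
import groovy.sql.Sql

class ReportService {
    // injected by Grails
    def sessionFactory

    def runReport() {
        // Reuse the connection Hibernate already holds for this request,
        // instead of borrowing a second one from the pool via new Sql(dataSource)
        def sql = new Sql(sessionFactory.currentSession.connection())
        sql.eachRow('select id, name from item') { row ->
            // process row
        }
        // do NOT call sql.close() here: the connection belongs to the Hibernate session
    }
}
```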
Related
In our Grails application, we use one database per tenant. Each database contains data for all domain objects. We switch databases based on the request context, using a pattern like DomainClass.getDS().find().
Transactions do not work out of the box, since the request does not know which transaction manager to use. Both @Transactional and withTransaction do nothing.
I have implemented my own version of withTransaction():
public static def withTransaction(Closure callable) {
    new TransactionTemplate(getTM()).execute(callable as TransactionCallback)
}
getTM() returns a transaction manager based on the request context, for example transactionManager_db0.
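A minimal sketch of what getTM() might look like; the bean-name convention transactionManager_&lt;tenant&gt; comes from the question, while TenantContext and its currentTenant() method are hypothetical stand-ins for however the request context resolves the tenant:

```groovy
import grails.util.Holders
import org.springframework.transaction.PlatformTransactionManager

private static PlatformTransactionManager getTM() {
    // Assumed: TenantContext.currentTenant() returns e.g. "db0" for this request
    def tenant = TenantContext.currentTenant()
    // Look up the per-tenant transaction manager bean, e.g. transactionManager_db0
    Holders.applicationContext.getBean("transactionManager_${tenant}", PlatformTransactionManager)
}
```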
This seems to work in my sandbox. Will this also work:
For parallel requests?
If the database has several replicas?
For hierarchical transactions?
I believe in TDD, but with the exception of the last bullet I am having a hard time writing tests to verify this code.
My use case is:
I have an existing JTA TransactionManager and a Transaction in-flight. I'd like to enlist Neo4j as an XAResource in this Transaction such that it may prepare/commit in proper 2PC.
I'm not seeing a public XAResource implementation in Neo4j; everything seems to be routed through the NioNeoDbPersistenceSource > NeoStoreXaDataSource > NeoStoreXaConnection.NeoStoreXaResource.
Is there a preferred way to enlist Neo4j in JTA Transactions outside those provided by its own TransactionManager? All the test cases I'm finding enlist a mock "FakeXAResource" [1].
Appreciated!
S,
ALR
[1] e.g. UseJOTMAsTxManagerIT
OK, I have a solution which I believe is the best that Neo4j can handle, though I'm not thrilled with it. :)
https://gist.github.com/ALRubinger/b584065d0e7da251469a
The idea is this:
1) Implement Neo4j's AbstractTransactionManager
This clunky class is a composite of the JTA TransactionManager, Neo4j Lifecycle, and some other methods; it's not completely clear to me what some of these (e.g. "getEventIdentifier()" or "doRecovery()") are supposed to do, and the contract feels over-specified. I'm not certain why we'd want lifecycle methods in here for the case where Neo4j is not the authoritative owner of the TransactionManager.
2) Implement Neo4j's TransactionManagerProvider
This will let you create a new instance of your AbstractTransactionManager implementation, but it's bound by the JDK Service SPI, so you must supply a no-arg constructor and find some other intelligent/hacky way of passing contextual information in.
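Since the JDK service SPI instantiates the provider reflectively through its no-arg constructor, one hacky-but-workable way to pass contextual information is a static holder that you populate before the provider is loaded. A minimal, Neo4j-free sketch of that pattern (all class names here are hypothetical, not Neo4j APIs):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical context the provider needs but cannot receive via constructor args
class TxContext {
    final String externalTmJndiName;
    TxContext(String externalTmJndiName) { this.externalTmJndiName = externalTmJndiName; }
}

// Static holder: set this before the ServiceLoader instantiates the provider
class TxContextHolder {
    private static final AtomicReference<TxContext> CONTEXT = new AtomicReference<>();
    static void set(TxContext ctx) { CONTEXT.set(ctx); }
    static TxContext get() {
        TxContext ctx = CONTEXT.get();
        if (ctx == null) throw new IllegalStateException("TxContext not initialized");
        return ctx;
    }
}

// The SPI implementation: the required no-arg constructor pulls config from the holder
class MyTransactionManagerProvider {
    final TxContext context;
    public MyTransactionManagerProvider() {
        this.context = TxContextHolder.get();
    }
}
```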
3) Create a META-INF/services/org.neo4j.kernel.impl.transaction.TransactionManagerProvider file, with contents of the FQN of your TransactionManagerProvider impl from step 2)
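The file contains a single line: the fully-qualified class name of your provider from step 2, for example (package and class name are placeholders):

```
com.example.tx.MyTransactionManagerProvider
```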
4) When you create a new GraphDatabaseService, pass in config like:
final GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory()
    .newEmbeddedDatabaseBuilder(FILE_NAME_STORAGE)
    .setConfig(GraphDatabaseSettings.tx_manager_impl.name(),
        "NAME_OF_YOUR_TXM_PROVIDER")
    .newGraphDatabase();
Then you access the TransactionManager using a deprecated API (GraphDatabaseAPI):
// Get at Neo4j's view of the TransactionManager
final TransactionManager tm = ((GraphDatabaseAPI) graphDatabaseService).getDependencyResolver().resolveDependency(TransactionManager.class);
tm.begin();
final Transaction tx = tm.getTransaction();
The real problem I have with this approach is that we have to use the TransactionManager implementation from Neo4j, which is wrapping our real TM. What I want to do is use my TM and enlist Neo4j as an XAResource.
So I still haven't found a way to do that, and judging from the Neo4j test suites I don't think it's possible at the moment with any of their supplied XAResource support.
Absolutely willing and hoping to be corrected! :)
But failing the points I mention above, the attached gist works and shows Neo4j using an external TransactionManager (Narayana, from us at JBoss) as the backing implementation.
S,
ALR
Hi WAS developers,
I have a problem: a JNDI lookup on a declared persistence-context-ref always returns a new EntityManager instance. I expect that within one JTA transaction the container always provides me the same EntityManager instance. Multiple EntityManagers within one transaction cause lock trouble! Furthermore, JPA usage is not optimized, as entities might be loaded several times (once per EntityManager) within one transaction.
I have to use persistence-context-ref together with JNDI lookups, as I have some EJB 2.1 beans in place within an EJB 3.1 module. Furthermore, I want the EntityManager to be container-managed.
To reproduce, just declare a persistence-context-ref on an EJB 2.1 SessionBean:
<persistence-context-ref>
    <persistence-context-ref-name>persistence/MyPersistence</persistence-context-ref-name>
    <persistence-unit-name>MyPersistence</persistence-unit-name>
</persistence-context-ref>
Now perform two consecutive JNDI lookups within an open JTA transaction:
context.lookup("java:comp/env/persistence/MyPersistence")
You will see that two different EntityManager instances are returned.
Is this a defect in WAS?
The EntityManager returned from a persistence-context-ref lookup is actually a proxy to a per-transaction EntityManager (a debugger or print will see it is an instance of a JPATxEntityManager), so even though each lookup returns a unique object, they will all interact with the same underlying EntityManager.
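That behavior can be illustrated with a stripped-down sketch: two distinct proxy objects that both resolve to the same per-transaction delegate. All classes below are simplified stand-ins, not WebSphere APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the real EntityManager
class FakeEntityManager { }

// Stand-in for the container's per-transaction registry (keyed by the active JTA tx)
class TxRegistry {
    private static final Map<String, FakeEntityManager> PER_TX = new HashMap<>();
    static FakeEntityManager forTransaction(String txId) {
        return PER_TX.computeIfAbsent(txId, k -> new FakeEntityManager());
    }
}

// Each JNDI lookup returns a NEW proxy, but every proxy resolves its delegate
// through the per-transaction registry, so they share one underlying EntityManager
class TxEntityManagerProxy {
    private final String txId;
    TxEntityManagerProxy(String txId) { this.txId = txId; }
    FakeEntityManager delegate() { return TxRegistry.forTransaction(txId); }
}
```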
I'm wondering what the most efficient way is to update a single value in a domain class from a row in the database. Let's say the domain class has 20+ fields.
def itemId = 1
def item = Item.get(itemId)
item.itemUsage += 1
Item.executeUpdate("update Item set itemUsage = :itemUsage where id = :itemId", [itemUsage: item.itemUsage, itemId: item.id])
vs
def item = Item.get(itemId)
item.itemUsage += 1
item.save(flush: true)
executeUpdate is more efficient when the un-updated fields are large or numerous (and this is subjective). It's how I often delete instances too, running 'delete from Foo where id=123', since it seems wasteful to load the instance fully just to call delete() on it.
If you have large strings in your domain class and use the get() and save() approach then you serialize all of that data from the database to the web server twice unnecessarily when all you need to change is one field.
The effect on the 2nd-level cache needs to be considered if you're using it (and if you edit instances a lot you probably shouldn't). With executeUpdate it will flush all instances previously loaded with get(), but if you update with get + save it flushes just that one instance. This gets worse if you're clustered, since after executeUpdate you'd clear all of the various cluster node caches, versus flushing the one instance on all nodes.
Your best bet is to benchmark both approaches. If you're not overloading the database then you may be prematurely optimizing and using the standard approach might be best to keep things simple while you solve other problems.
If you use get/save, you'll get the maximum advantage of the Hibernate cache. executeUpdate might force more selects and updates.
The way executeUpdate interacts with the Hibernate cache makes a difference here. The Hibernate cache gets invalidated on executeUpdate, so the next access of that Item after the executeUpdate would have to go to the database (and possibly more: I think Hibernate might invalidate all Items in the cache).
Your best bet is to turn on debug logging for 'org.hibernate' in your Config.groovy and examine the SQL calls.
I think they are equal. They both issue two SQL calls.
More efficient would be just a single update
Item.executeUpdate("update Item set itemUsage = itemUsage + 1 where id = :itemId", [itemId: itemId])
You can use the dynamicUpdate mapping attribute in your Item class:
http://grails.org/doc/latest/ref/Database%20Mapping/dynamicUpdate.html
With this option enabled, your second way of updating a single field using GORM will be as efficient as the first one.
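A minimal sketch of that mapping on the Item class from the question:

```groovy
class Item {
    Integer itemUsage
    // ... 20+ other fields

    static mapping = {
        // generate UPDATE statements containing only the dirty columns
        dynamicUpdate true
    }
}
```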
I need to add a JNDI datasource from a legacy database to my Grails (1.2.2) application.
So far, the resource is added to my Tomcat (5.5) and DataSource.groovy contains:
development {
    dataSource {
        jndiName = "jdbc/lrc_legacy_db"
    }
}
I also created some domain classes mapping the different tables so I can comfortably load and handle data from the DB with GORM. But I now want to ensure that every connection to this DB is really read-only. My biggest concern here is the dbCreate property and automatic database manipulation through GORM and the GORM classes.
Is it enough to just skip dbCreate?
How do I assure that the database will only be read and never ever manipulated in any way?
You should use the validate option for dbCreate.
EDIT: The documentation is now quite different from when I first posted this answer, so the link doesn't quite get you to where the validate option is explained. A quick find will get you to the right spot.
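In DataSource.groovy that would look like the following (keeping the JNDI name from the question):

```groovy
development {
    dataSource {
        jndiName = "jdbc/lrc_legacy_db"
        // "validate" only checks the schema against the domain classes;
        // it never creates, updates, or drops tables
        dbCreate = "validate"
    }
}
```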
According to the Grails documentation:
If your application needs to read but never modify instances of a persistent class, a read-only cache may be used
A read-only cache for a domain class can be configured as follows:
1. Enable Caching
Add something like the following to DataSource.groovy
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.provider_class = 'org.hibernate.cache.EhCacheProvider'
}
2. Make Cache Read-Only
For each domain class, you will need to add the following to the mapping closure:
static mapping = {
    cache usage: 'read-only', include: 'non-lazy'
}