My use case is:
I have an existing JTA TransactionManager and a Transaction in-flight. I'd like to enlist Neo4j as an XAResource in this Transaction such that it may prepare/commit in proper 2PC.
I'm not seeing a public XAResource implementation in Neo4j; everything seems to be routed through the NioNeoDbPersistenceSource > NeoStoreXaDataSource > NeoStoreXaConnection.NeoStoreXaResource.
Is there a preferred way to enlist Neo4j in JTA Transactions outside those provided by its own TransactionManager? All the test cases I'm finding enlist a mock "FakeXAResource"[1].
Appreciated!
S,
ALR
[1] e.g. UseJOTMAsTxManagerIT
OK, I have a solution which I believe is the best that Neo4j can handle, though I'm not thrilled with it. :)
https://gist.github.com/ALRubinger/b584065d0e7da251469a
The idea is this:
1) Implement Neo4j's AbstractTransactionManager
This clunky class is a composite of JTA TransactionManager, Neo4j Lifecycle, and some other methods; it's not completely clear to me what some of these (e.g. "getEventIdentifier()" or "doRecovery()") are supposed to do, and the contract feels over-specified. I'm not certain why we'd want lifecycle methods in here for the case where Neo4j is not the authoritative owner of the TransactionManager.
2) Implement Neo4j's TransactionManagerProvider
This will let you create a new instance of your AbstractTransactionManager implementation, but it's bound by the JDK Service SPI, so you must supply a no-arg constructor and find some other intelligent/hacky way of passing contextual information in.
3) Create a META-INF/services/org.neo4j.kernel.impl.transaction.TransactionManagerProvider file, with contents of the FQN of your TransactionManagerProvider impl from step 2) (a one-line example of this file follows after step 4)
4) When you create a new GraphDatabaseService, pass in config like:
final GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory()
    .newEmbeddedDatabaseBuilder(FILE_NAME_STORAGE)
    .setConfig(GraphDatabaseSettings.tx_manager_impl.name(),
        "NAME_OF_YOUR_TXM_PROVIDER")
    .newGraphDatabase();
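For step 3, the services file is a single line naming your provider class (the FQN below is a hypothetical placeholder for your own impl):
# Contents of META-INF/services/org.neo4j.kernel.impl.transaction.TransactionManagerProvider
com.example.tx.MyTransactionManagerProvider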
Then you access the TransactionManager using a deprecated API (GraphDatabaseAPI):
// Get at Neo4j's view of the TransactionManager
final TransactionManager tm = ((GraphDatabaseAPI) graphDatabaseService).getDependencyResolver().resolveDependency(TransactionManager.class);
tm.begin();
final Transaction tx = tm.getTransaction();
The real problem I have with this approach is that we have to use the TransactionManager implementation from Neo4j, which is wrapping our real TM. What I want to do is use my TM and enlist Neo4j as an XAResource.
So I still haven't found a way to do that, and judging from the Neo4j test suites I don't think it's possible at the moment with any of their supplied XAResource support.
Absolutely willing and hoping to be corrected! :)
But failing the points I mention above, the attached gist works and shows Neo4j using an external TransactionManager (Narayana, from us at JBoss) as the backing implementation.
S,
ALR
I am trying to implement a solution using SDN to build a dynamic Cypher query, where my label varies with the input type (n types), irrespective of the properties of the node.
I'm hoping a solution similar to the one mentioned in this link would help me:
Is it possible to dynamically construct a neo4j cypher query using the GraphRepository pattern
I found the information below in the release notes.
Deprecation of Neo4jTemplate
It is highly recommended for users starting new SDN projects to use the OGM Session directly. Neo4jTemplate has been kept to give upgrading users a better experience.
The Neo4jTemplate has been slimmed-down significantly for SDN 4. It contains the exact same methods as Session. In fact Neo4jTemplate is just a very thin wrapper with an ability to support SDN Exception Translation. Many of the operations are no longer needed or can be expressed with a straightforward Cypher query.
If you do use Neo4jTemplate, then you should code against its Neo4jOperations interface instead of the template class.
The following table shows the Neo4jTemplate functions that have been retained for version 4 of Spring Data Neo4j. In some cases the method names have changed but the same functionality is offered under the new version.
To achieve the old template.fetch(entity) equivalent behaviour, you should call one of the load methods specifying the fetch depth as a parameter.
It’s also worth noting that exec(GraphCallback) and the create…() methods have been made obsolete by Cypher. Instead, you should now issue a Cypher query to the new execute method to create the nodes or relationships that you need.
Dynamic labels, properties and relationship types are not supported as of this version, server extensions should be considered instead.
From this link: https://docs.spring.io/spring-data/neo4j/docs/5.0.0.RELEASE/reference/html/
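To illustrate the load-with-depth replacement the notes describe, here is a minimal sketch against the OGM Session (the Person class and id are hypothetical):
// Equivalent of the old template.fetch(entity): reload the entity with an explicit fetch depth
Person person = session.load(Person.class, id, 2);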
Could anyone help me achieve the equivalent solution in SDN 5.x?
Thanks!!!
I took the advice to use the session directly in place of the Neo4jOperations mechanism.
@Autowired
SessionFactory sessionFactory;

public void doCustomQuery() {
    Session session = sessionFactory.openSession();
    Map<String, Object> params = new HashMap<>(); // query parameters (none needed for this query)
    Iterable<NodeEntity> nodes = session.query(NodeEntity.class, "MATCH (n) RETURN n", params);
}
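Since the notes above say dynamic labels are not supported, one workaround sketch (findByDynamicLabel is a hypothetical helper reusing the sessionFactory above) is to build the Cypher string yourself, because Cypher cannot parameterize labels:
// Hypothetical helper: the label is concatenated into the query string,
// so validate it against a whitelist to avoid Cypher injection.
public Iterable<NodeEntity> findByDynamicLabel(String label) {
    Session session = sessionFactory.openSession();
    String cypher = "MATCH (n:`" + label + "`) RETURN n";
    return session.query(NodeEntity.class, cypher, java.util.Collections.emptyMap());
}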
In Grails I can create a Sql object in two ways:
def sql = new Sql(sessionFactory.currentSession.connection())
def sql = new Sql(dataSource)
I have read this thread here on Stackoverflow: Getting the SessionFactory for a particular Datasource in Grails
... and one of the answers was that dataSource "...gobbles up excessive connections better to use sessionFactory.currentSession.connection()"
Is this recommendation correct, and what is the difference between the two?
When inspecting the created objects I could see that they are almost the same; just two properties were different:
dataSource and useConnection.
In the case of dataSource it was dataSource=TransactionAwareDataSourceProxy and useConnection=null, while for sessionFactory it is dataSource=null and useConnection=$Proxy36.
Why does this make a difference, with the consequence of "gobbling up excessive connections"?
The comment about "gobbling up excessive connections" is based on a few assumptions, which may or may not be true in your case.
The assumption is that Hibernate already has created, or will create during the request, a connection to the database because of the session-in-view pattern used by Grails and GORM. In that case you would be using one connection for Hibernate and n connections for your other SQL work.
If you use a mix of GORM and SQL connections, it's safer to get the connection from the sessionFactory.
I seem to recall older versions of Grails used to create the connection even if no GORM methods were executed during the request. I'm not entirely sure that's still the case with more recent versions (2.x+).
In our Grails application, we use one database per tenant. Each database contains the data of all domain objects. We switch databases based on the request context, using a pattern like DomainClass.getDS().find().
Transactions do not work out of the box, since the request does not know which transaction manager to use. The @Transactional annotation and withTransaction do nothing.
I have implemented my own version of withTransaction():
public static def withTransaction(Closure callable) {
    // Run the closure inside a transaction owned by the tenant's transaction manager
    new TransactionTemplate(getTM()).execute(callable as TransactionCallback)
}
getTM() returns a transaction manager based on the request context, for example transactionManager_db0.
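A minimal sketch of what getTM() might look like, assuming one transactionManager_<tenant> Spring bean per database and a hypothetical TenantContext holder for the current request's tenant:
import grails.util.Holders
import org.springframework.transaction.PlatformTransactionManager

private static PlatformTransactionManager getTM() {
    String tenant = TenantContext.current() // hypothetical request-scoped tenant resolver
    // Look up the per-tenant bean, e.g. transactionManager_db0
    Holders.applicationContext.getBean("transactionManager_${tenant}", PlatformTransactionManager)
}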
This seems to work in my sandbox. Will this also work:
- For parallel requests?
- If the database has several replicas?
- For hierarchical transactions?
I believe in TDD, but with the exception of the last bullet, I have a hard time providing tests to verify the code.
Hi WAS developers,
I have a problem: a JNDI lookup on a declared persistence-context-ref always returns a new EntityManager instance. I expect that within one JTA transaction the container always provides me the same EntityManager instance. But multiple EntityManagers within one transaction cause lock trouble! Furthermore, JPA usage is not optimized, as entities might be loaded several times (once per EntityManager) within one transaction.
I have to use persistence-context-ref together with JNDI lookups, as I have some EJB 2.1 beans in place within an EJB 3.1 module. Furthermore, I want the EntityManager to be container-managed.
To reproduce, just declare a persistence-context-ref on an EJB 2.1 SessionBean:
<persistence-context-ref>
  <persistence-context-ref-name>persistence/MyPersistence</persistence-context-ref-name>
  <persistence-unit-name>MyPersistence</persistence-unit-name>
</persistence-context-ref>
Now perform two consecutive JNDI lookups within an open JTA transaction:
context.lookup("java:comp/env/persistence/MyPersistence")
You will see that two different EntityManager instances are returned.
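For illustration, a minimal reproduction sketch (assuming an active JTA transaction and the persistence-context-ref declared above):
import javax.naming.InitialContext;
import javax.persistence.EntityManager;

// Two consecutive lookups of the same persistence-context-ref
InitialContext context = new InitialContext();
EntityManager em1 = (EntityManager) context.lookup("java:comp/env/persistence/MyPersistence");
EntityManager em2 = (EntityManager) context.lookup("java:comp/env/persistence/MyPersistence");
System.out.println(em1 == em2); // prints false: two distinct instances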
Is this a defect in WAS?
The EntityManager returned from a persistence-context-ref lookup is actually a proxy to a per-transaction EntityManager (a debugger or a print statement will show it is an instance of JPATxEntityManager), so even though each lookup returns a unique object, they will all interact with the same underlying EntityManager.
I'm trying to unit test a few .NET classes that (for good design reasons) require DbConnections to do their work. For these tests, I have certain data in memory to give as input to these classes.
That in-memory data could be easily expressed as a DataTable (or a DataSet that contains that DataTable), but if another class were more appropriate I could use it.
If I were somehow magically able to get a DbConnection that represented a connection to the in-memory data, then I could construct my objects, have them execute their queries against the in-memory data, and ensure that their output matched expectations. Is there some way to get a DbConnection to in-memory data? I don't have the freedom to install any additional third-party software to make this happen, and ideally, I don't want to touch the disk during the tests.
Rather than consume a DbConnection, can you consume IDbConnection and mock it? We do something similar: we pass the mock a DataSet. DataSet.CreateDataReader returns a DataTableReader, which inherits from DbDataReader.
We have wrapped DbConnection in our own IDbConnection-like interface to which we've added an ExecuteReader() method which returns a class that implements the same interfaces as DbDataReader. In our mock, ExecuteReader simply returns what DataSet.CreateDataReader serves up.
Sounds kind of roundabout, but it is very convenient to build up a DataSet with possibly many result sets. We name the DataTables after the stored procs whose results they represent, and our IDbConnection mock grabs the right DataTable based on the proc the client is calling. DataTable also implements CreateDataReader, so we're good to go.
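A minimal sketch of that setup, assuming the Moq library and a hypothetical IDbConnectionWrapper interface exposing the ExecuteReader(string) method described above:
// Build the in-memory result set, named after the proc it stands in for
var dataSet = new DataSet();
var table = new DataTable("GetCustomers");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Name", typeof(string));
table.Rows.Add(1, "Alice");
dataSet.Tables.Add(table);

// Mock the wrapper so ExecuteReader serves the matching table's reader
var mock = new Mock<IDbConnectionWrapper>();
mock.Setup(c => c.ExecuteReader("GetCustomers"))
    .Returns(() => dataSet.Tables["GetCustomers"].CreateDataReader());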
An approach that I've used is to create an in-memory SQLite database. This may be done simply by pulling the System.Data.SQLite.Core NuGet package into your unit test project; you don't need to install any software anywhere else.
Although it sounds like a really obvious idea, it wasn't until I was looking at the Dapper unit tests that I thought to use the technique myself! See the "GetSqliteConnection" method in
https://github.com/StackExchange/dapper-dot-net/blob/bffb0972a076734145d92959dabbe48422d12922/Dapper.Tests/Tests.cs
One thing to be aware of: if you create an in-memory SQLite db and then create and populate tables, be careful not to close the connection before performing your test queries, because opening a new in-memory connection will get you a connection to a new in-memory database, not the database you just carefully prepared for your tests! For some of my tests, I use a custom IDbConnection implementation that keeps the connection open to avoid this pitfall - e.g.
https://github.com/ProductiveRage/SqlProxyAndReplay/blob/master/Tests/StaysOpenSqliteConnection.cs
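A minimal sketch of that approach, assuming the System.Data.SQLite.Core package (the table and data are hypothetical):
// Create an in-memory SQLite database; it lives only as long as this connection stays open
var connection = new SQLiteConnection("Data Source=:memory:");
connection.Open();
using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = "CREATE TABLE People (Id INTEGER, Name TEXT); " +
                      "INSERT INTO People VALUES (1, 'Alice');";
    cmd.ExecuteNonQuery();
}
// Run the code under test against 'connection' before closing it;
// opening a second new SQLiteConnection would see a fresh, empty database.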
TypeMock? (You would need to 'install' it though).
Be careful assuming that Data* can give you proper hooks for testing - it's pretty much the worst case in general. But you say Good Design Reasons, so I'm sure that's all covered :D