I'm trying to develop a J2EE application (on WildFly 8.2) that uses embedded Neo4j 2.2.1.
Since I'm migrating it from Neo4j 1.9, the application uses the legacy indexing system.
I'm experiencing problems with some operations that shouldn't require a transaction: they throw transaction-related exceptions.
For example:
for (Iterator<String> iterator = node.getPropertyKeys().iterator(); iterator.hasNext(); ) {
    key = iterator.next();
    ....
}
The stacktrace:
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.assertInUnterminatedTransaction(ThreadToStatementContextBridge.java:71)
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.getTopLevelTransactionBoundToThisThread(ThreadToStatementContextBridge.java:104)
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.getKernelTransactionBoundToThisThread(ThreadToStatementContextBridge.java:111)
at org.neo4j.kernel.impl.core.ThreadToStatementContextBridge.instance(ThreadToStatementContextBridge.java:64)
at org.neo4j.kernel.InternalAbstractGraphDatabase$8.statement(InternalAbstractGraphDatabase.java:785)
at org.neo4j.kernel.impl.core.NodeProxy.getPropertyKeys(NodeProxy.java:358)
Note that I get the same error if I call the following:
Index<Node> indexNode = ...
...
indexNode.get("users", "test@test.com").getSingle()
where indexNode is an index previously created in a transaction.
Any ideas?
Thank you
The most impactful breaking change from Neo4j 1.9 to 2.0 was mandatory read transactions: whenever you do read operations, you need a transaction around them.
try (Transaction tx = graphDatabaseService.beginTx()) {
    // read stuff - either graph or index operations
    ...
    tx.success(); // make sure to have this, otherwise trouble might happen in case of nested transactions
}
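Applied to the snippets from the question, a minimal sketch (assuming graphDatabaseService, node and indexNode are already initialized as above) would be:

try (Transaction tx = graphDatabaseService.beginTx()) {
    // property reads now happen inside a transaction
    for (String key : node.getPropertyKeys()) {
        Object value = node.getProperty(key);
        // ...
    }
    // legacy index lookups need a transaction as well
    Node user = indexNode.get("users", "test@test.com").getSingle();
    tx.success();
}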
Related
What is the proper way to get a DB connection in Grails 3?
For Grails 2 the following code worked:
((SessionImpl) sessionFactory.getCurrentSession()).connection() // sessionFactory initialized in bootstrap
But after migrating to Grails 3 I sometimes see exceptions in the log:
java.sql.SQLException: Operation not allowed after ResultSet closed
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:957)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:896)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:885)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:860)
at com.mysql.jdbc.ResultSetImpl.checkClosed(ResultSetImpl.java:743)
at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1037)
at com.mysql.jdbc.ResultSetImpl.getLong(ResultSetImpl.java:2757)
at com.mchange.v2.c3p0.impl.NewProxyResultSet.getLong(NewProxyResultSet.java:424)
at java_sql_ResultSet$getLong$3.call(Unknown Source)
It happens for 0.01% of requests.
Grails 3.2.11
Gorm 6.0.12
I guess it depends on where you need it, but you can inject a DataSource into a service.
javax.sql.DataSource dataSource
Then you can just use
dataSource.getConnection()
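For example, a plain-JDBC sketch (the note table and id column are just placeholders); try-with-resources makes sure the connection and result set are closed promptly:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT id FROM note");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        System.out.println(rs.getLong("id"));
    }
}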
Also be aware of the changes to flush mode in GORM 6 (http://gorm.grails.org/6.0.x/hibernate/manual/ section 1.2.1). If an upstream save/commit is failing, your result set could be incidentally closed and trigger an error that looks like this while not really having anything to do with this particular line of code at all. I'd (very temporarily) set it back to the old flush mode and see if the problem goes away, before tracking much more down!
Per the Grails docs, you can get the actual dataSource bean injected. From it you can access the connection or use it to query your DB:
import groovy.sql.Sql

def dataSource

println "connection: ${dataSource.connection}"
Sql sql = new Sql(dataSource)
sql.eachRow("SELECT * FROM note") { row ->
    println "row: ${row}"
}
Use 'dataSourceUnproxied' to avoid Hibernate transaction and session issues:
def dataSourceUnproxied
To execute queries inside the current Hibernate transaction, the following construction can be used:
sessionFactory.currentSession.doWork { connection ->
    new Sql(connection).execute(query, params)
}
Does this mean we cannot call something like this via the Java API?
I get the error "Caused by: org.neo4j.graphdb.QueryExecutionException: Cannot perform schema updates in a transaction that has performed data updates."
This happens when I call a schema update from a procedure invoked via the Neo4j console.
try (Transaction tx = db.beginTx()) {
    String query = "CREATE INDEX ON :" + lbl + "(" + name + ")";
    db.execute(query);
    tx.success();
}
The Cypher query calling the procedure is already executed in a transaction, and there are no nested transactions in Neo4j: when you call db.beginTx(), you're getting the existing transaction, and it's not actually necessary unless you need the Transaction object (e.g. to create locks).
Anyway, even though it's not explicitly documented, it's apparently not possible to manipulate the schema from Neo4j procedures. You could say that it fails the use case of
To provide access to functionality that is not available in Cypher, such as manual indexes and schema introspection.
I created a test procedure similar to yours:
public class IndexProcedure {
    @Context
    public GraphDatabaseService db;

    @Procedure
    @PerformsWrites
    public void index(@Name("label") String label, @Name("property") String property) {
        db.schema().indexFor(Label.label(label)).on(property).create();
    }
}
and ran it from the shell in the simplest Cypher query:
CALL my.package.index('Node', 'name');
Without the @PerformsWrites annotation, I get the following (expected) exception:
WARNING: Failed to invoke procedure my.package.index: Caused by: org.neo4j.graphdb.security.AuthorizationViolationException: Schema operations are not allowed for READ transactions.
With the annotation, I get the same exception as you:
WARNING: Failed to invoke procedure my.package.index: Caused by: org.neo4j.graphdb.QueryExecutionException: Cannot perform schema updates in a transaction that has performed data updates.
I guess the rationale is that setting up the schema is mostly a one-time operation that doesn't really need a procedure: if you're going to execute some Cypher query to call the procedure, you might as well run the script which creates the constraints and indices.
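For a one-time setup you could, for instance, create the index from application code in its own transaction, before any data updates have happened in that transaction (a sketch reusing the Schema API from the procedure above; the label and property names are illustrative):

try (Transaction tx = db.beginTx()) {
    // schema updates must run in a transaction that has performed no data updates
    db.schema().indexFor(Label.label("Node")).on("name").create();
    tx.success();
}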
There could also be technical constraints: index creation is asynchronous and probably doesn't participate in the transaction (can you rollback the creation of an index?).
Or maybe it's just a bug? We should get someone from Neo to confirm.
Update: it will supposedly be fixed in Neo4j 3.1 when it's released, per a discussion on SlackHQ.
What is the correlation between Spring's org.springframework.transaction.annotation.Transactional annotation and the Neo4j OGM org.neo4j.ogm.session.Session.getTransaction() method?
I'm trying to access the current transaction via session.getTransaction() inside a method annotated with Spring's @Transactional but always get null.
I have added the following code inside my Spring MVC RestController method:
Transaction tx = session.beginTransaction();
try {
    for (int i = 0; i < 10; i++) {
        initializeNode(node);
    }
    tx.commit();
} catch (Throwable th) {
    logger.error("Error while inserting mock data", th);
    th.printStackTrace();
} finally {
    tx.close();
}
in case of the following method:
private void initializeNode(TestNode node) {
    System.out.println(session.getTransaction());
}
it prints the current tx - so far everything is okay.
But in case of the following method:
private void initializeNode(TestNode node) {
    System.out.println(session.getTransaction());
    User admin = userDao.findByUsername("admin");
}
The first time it prints the current tx, and then null... the transaction disappears before commit for some reason.
This is the findByUsername method:
@Service
@Transactional
public class UserDaoImpl implements UserDao {

    @Override
    @Transactional(readOnly = true)
    public User findByUsername(String username) {
        return userRepository.findByUsername(username);
    }
    ...
}
Right after that, on commit, I get the following exception:
org.neo4j.ogm.exception.TransactionManagerException: Transaction is not current for this thread
at org.neo4j.ogm.session.transaction.DefaultTransactionManager.commit(DefaultTransactionManager.java:100)
at org.neo4j.ogm.transaction.AbstractTransaction.commit(AbstractTransaction.java:83)
at org.neo4j.ogm.drivers.embedded.transaction.EmbeddedTransaction.commit(EmbeddedTransaction.java:77)
What am I doing wrong? Why does the transaction disappear?
There are several issues and themes going on in this question. I will try and break them down and hopefully at the end it will all make sense.
As of the latest release of Spring Data Neo4j (4.1.x) there is no correlation between Spring's @Transactional and the Neo4j OGM's Session.getTransaction() or Session.beginTransaction() when called directly.
In your first two code blocks you are completely managing your OGM session lifecycle directly. Spring is not involved at all at this point and as you say it executes as expected.
In your updated third code block you are now expecting the session that you have manually opened to work with your Spring-managed DAO. What happens here depends on the Neo4j driver you are using with SDN, but essentially, because your DAO has the @Transactional annotation, Spring will intercept the call and start a brand new transaction of its own on top of the one you are manually managing. At this point we can't make any guarantees about the behaviour, but the easiest description is that it will be unexpected (again, depending on the driver used).
So how can you fix this?
I'm going to assume you want to use Spring Transactions and Spring Data Neo4j. If that's the case you will want to start by:
Changing your DAO to use Spring Data Repositories. This gives you a lot of free persistence functionality like finders, saves, deletes etc.
Putting the @Transactional annotation around the unit of work you want to accomplish. You might have a method that calls userRepository.findByUsername(), modifies that user and calls userRepository.save(user); in a web environment this is typically some sort of service method (see the sketch after this list).
Removing any code that manually starts or ends an OGM session transaction.
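A minimal sketch of that shape (assuming SDN 4.1 and a User entity; all names here are illustrative):

public interface UserRepository extends GraphRepository<User> {
    User findByUsername(String username);
}

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    // one unit of work, demarcated by Spring; no manual OGM transaction handling
    @Transactional
    public void promoteToAdmin(String username) {
        User user = userRepository.findByUsername(username);
        user.setRole("admin");
        userRepository.save(user);
    }
}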
You can find a very short code sample here and a longer code sample here.
A more comprehensive guide can also be found here.
In Spring Data Neo4j 4.2.x we hope to introduce more powerful and friendlier @Transactional behaviour, so stay posted for that update.
I am implementing the Traversal Framework using the Neo4j java-rest-binding project.
Code is as follows:
RestAPI db = new RestAPIFacade("http://localhost:7474/db/data");
RestNode n21 = db.getNodeById(21);

Map<String, Object> traversalDesc = new HashMap<String, Object>();
traversalDesc.put("order", "breadth_first");
traversalDesc.put("uniqueness", "node_global");
traversalDesc.put("uniqueness", "relationship_global"); // note: this overwrites the previous "uniqueness" entry
traversalDesc.put("returnType", "fullpath");
traversalDesc.put("max_depth", 2);

RestTraverser traverser = db.traverse(n21, traversalDesc);

Iterable<Node> nodes = traverser.nodes();
System.out.println("All Nodes:"); // First Task
for (Node n : nodes) {
    System.out.println(n.getId());
}

Iterable<Relationship> rels = traverser.relationships();
System.out.println("All Relations:"); // Second Task
for (Relationship r : rels) {
    System.out.println(r.getId());
}

Iterator<Path> paths = traverser.iterator(); // Third Task
while (paths.hasNext()) {
    System.out.println(paths.next());
}
I need to do 3 tasks as commented in the code:
Print all the node IDs related to node no. 21
Print all the relation IDs related to node no. 21
Traverse all the paths related to node no. 21
Tasks 1 & 2 are working fine.
But when I try to do traverser.iterator() in 3rd task it throws an Exception saying:
java.lang.IllegalAccessError: tried to access class org.neo4j.helpers.collection.WrappingResourceIterator from class org.neo4j.rest.graphdb.traversal.RestTraverser
Can anyone please check why this is happening, or, if I am doing something wrong, tell me the right way to do it?
Thanks in advance.
I don't believe using the Neo4j Traversal Framework via the REST DB binding is properly supported, nor is it advisable. If you traverse via REST, each node and each relationship will be retrieved over the network as the traversal proceeds, incurring a tremendous overhead for the traversal.
Edit: The above is not true, the REST traverser is smarter than I thought.
In general, it will be faster to use Cypher, and access the Neo4j Server using JDBC. Read more about JDBC here: https://github.com/neo4j-contrib/neo4j-jdbc
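For example, a sketch with that driver (the URL is the default local server; I'm assuming the 2.x contrib driver, which uses {1}-style Cypher parameters):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

try (Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
     PreparedStatement stmt = conn.prepareStatement(
             "MATCH (n)-[r]-(m) WHERE id(n) = {1} RETURN id(r) AS rid, id(m) AS mid")) {
    stmt.setLong(1, 21);
    try (ResultSet rs = stmt.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getLong("rid"));
        }
    }
}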
If you really want to use the Traversal Framework, you should use Server Extensions, which allow you to design a traversal to run on the server itself, and then only move the result of the traversal over the network. Read more about server extensions here: http://docs.neo4j.org/chunked/stable/server-unmanaged-extensions.html
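A bare-bones sketch of such an extension (JAX-RS; the mount point, class name and depth are illustrative, and the class has to be registered via org.neo4j.server.thirdparty_jaxrs_classes in neo4j-server.properties):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.traversal.Evaluators;

@Path("/traverse")
public class TraversalResource {

    private final GraphDatabaseService db;

    public TraversalResource(@Context GraphDatabaseService db) {
        this.db = db;
    }

    @GET
    @Path("/{nodeId}")
    @Produces(MediaType.TEXT_PLAIN)
    public Response fullPaths(@PathParam("nodeId") long nodeId) {
        StringBuilder sb = new StringBuilder();
        try (Transaction tx = db.beginTx()) {
            Node start = db.getNodeById(nodeId);
            // breadth-first to depth 2, like the traversal description in the question;
            // org.neo4j.graphdb.Path is fully qualified to avoid clashing with javax.ws.rs.Path
            for (org.neo4j.graphdb.Path path : db.traversalDescription()
                                                 .breadthFirst()
                                                 .evaluator(Evaluators.toDepth(2))
                                                 .traverse(start)) {
                sb.append(path).append('\n');
            }
            tx.success();
        }
        return Response.ok(sb.toString()).build();
    }
}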
According to the documentation there are four transaction isolation levels in Firebird. However, as far as I know, there's no explicit isolation level selection in the UIB library (TUIBTransaction), just a bunch of options for transactions. How should I use those? Is there documentation somewhere?
That bunch of options is what changes the isolation level. As @Arioch said in his compact comment, you change the isolation level by changing the Options property, which is of type TTransParams. This is a set of TTransParam, as below:
// Transaction parameters
TTransParam = (
{ prevents a transaction from accessing tables if they are written to by
other transactions.}
tpConsistency,
{ allows concurrent transactions to read and write shared data. }
tpConcurrency,
{ Concurrent, shared access of a specified table among all transactions. }
{$IFNDEF FB_21UP}
tpShared,
{ Concurrent, restricted access of a specified table. }
tpProtected,
tpExclusive,
{$ENDIF}
{ Specifies that the transaction is to wait until the conflicting resource
is released before retrying an operation [Default]. }
tpWait,
{ Specifies that the transaction is not to wait for the resource to be
released, but instead, should return an update conflict error immediately. }
tpNowait,
{ Read-only access mode that allows a transaction only to select data from tables. }
tpRead,
{ Read-write access mode that allows a transaction to select, insert,
update, and delete table data [Default]. }
tpWrite,
{ Read-only access of a specified table. Use in conjunction with tpShared,
tpProtected, and tpExclusive to establish the lock option. }
tpLockRead,
{ Read-write access of a specified table. Use in conjunction with tpShared,
tpProtected, and tpExclusive to establish the lock option [Default]. }
tpLockWrite,
tpVerbTime,
tpCommitTime,
tpIgnoreLimbo,
{ Unlike a concurrency transaction, a read committed transaction sees changes
made and committed by transactions that were active after this transaction started. }
tpReadCommitted,
tpAutoCommit,
{ Enables an tpReadCommitted transaction to read only the latest committed
version of a record. }
tpRecVersion,
tpNoRecVersion,
tpRestartRequests,
tpNoAutoUndo
{$IFDEF FB20_UP}
,tpLockTimeout
{$ENDIF}
);
Since the InterBase 6.0 code was open-sourced, the documentation for the API hasn't changed much, so if you want an explanation of any of them, the docs you are looking for are the InterBase manuals.
You can get them here: https://www.firebirdsql.org/en/reference-manuals/
Below I'm quoting Ann Harrison, from this link, for a quick explanation of the options usually used:
isc_tpb_consistency can cause performance problems due to the fact that it locks tables and possibly excludes concurrent access.
isc_tpb_concurrency is the design center for Firebird. Readers don't
block writers, writers don't block readers, and both get a consistent
view of the database.
isc_tpb_read_committed + isc_tpb_rec_version + isc_tpb_read_only gives
inconsistent results and occasionally produces an error on a blob
read*, but unlike other modes, it does not block garbage collection, so
it's a good mode for long-running read transactions that don't have to
get the "right" answer.
isc_tpb_read_committed + isc_tpb_rec_version has the same performance
as isc_tpb_concurrency, but gets inconsistent results - the same query
run twice in the same transaction may return different rows.
isc_tpb_read_committed + isc_tpb_no_rec_version + isc_tpb_wait is
slower than other modes because it will wait for a change to be
committed rather than reading the newest committed version. Like all
variants of isc_tpb_read_committed, it does not produce consistent
results.
isc_tpb_read_committed + isc_tpb_no_rec_version + isc_tpb_no_wait
gives lots and lots of deadlock errors because every time a reader
encounters a record that's being changed, it returns an error.
NOTE: I hope you can see that, although the parameters are not named identically, it's not hard to map them once you remove the "isc_tpb_" prefix.