Neo4j with spatial: NotFoundException: More than one relationship - neo4j

What is the cause and how to fix this exception:
org.neo4j.graphdb.NotFoundException: More than one relationship[RTREE_CHILD, INCOMING] found for NodeImpl#105
at org.neo4j.kernel.impl.core.NodeImpl.getSingleRelationship(NodeImpl.java:344)
at org.neo4j.kernel.impl.core.NodeProxy.getSingleRelationship(NodeProxy.java:191)
at org.neo4j.collections.rtree.RTreeIndex.getIndexNodeParent(RTreeIndex.java:768)
at org.neo4j.collections.rtree.RTreeIndex.adjustPathBoundingBox(RTreeIndex.java:672)
at org.neo4j.collections.rtree.RTreeIndex.add(RTreeIndex.java:90)
at org.neo4j.gis.spatial.EditableLayerImpl.add(EditableLayerImpl.java:44)
at org.neo4j.gis.spatial.ShapefileImporter.importFile(ShapefileImporter.java:209)
at org.neo4j.gis.spatial.ShapefileImporter.importFile(ShapefileImporter.java:122)
I am using Neo4j 2.0.0 and the spatial jars compiled from the GitHub project.
The exception is thrown when I try to import a shapefile (this code runs in an unmanaged extension):
GraphDatabaseService spatialDb = new GraphDatabaseFactory().newEmbeddedDatabase("/home/db/data/spatial.db");
Transaction tx = spatialDb.beginTx();
try {
    ShapefileImporter importer = new ShapefileImporter(spatialDb, new NullListener());
    importer.importFile("/home/bla/realshp/users_location.shp", "users_location");
    tx.success();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    tx.close();
    return Response.status(200).entity("Done. ").build();
}
The shapefile is generated from a CSV file with ogr2ogr - it looks legit and is read without exceptions. The original file contains around 30000 points defined as follows (ogr2ogr picks up the longitude and latitude columns):
id,longitude,latitude,gender,updated
3,-122.1171925,37.4343361,1,2013-11-20 05:03:22
304,-122.0919000,37.3094000,1,2013-11-03 00:42:01
311,-122.0919000,37.3094000,1,2013-11-03 00:42:01
How do I get around this? I need to load millions of points into the db.
Side question: right now I create a new spatial datastore - is that correct? Or should I load the data into an existing graph db?
UPDATE:
I tried to input the coordinates "manually" using methods from TestSimplePointLayer. I got the same exception around the 450th coordinate. A bunch of them are duplicates, as you can see in the sample, but they are valid points. How do I get around this?

You are skipping a step here. You create a spatial index and then you add the users to the index.
So for example if you had a shape file of all the states or counties or zip codes in the US, you can create a spatial layer with those shapes and add the users to them.
You can use a simple point layer as well if you want, but the points have to be unique; the user nodes that reside in those locations don't have to be. See http://java.dzone.com/articles/running-along-graph-using-0 and http://www.markhneedham.com/blog/2013/03/10/neo4jcypher-finding-football-stadiums-near-a-city-using-spatial/ for a better idea.
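A minimal Java sketch of that approach, assuming the SimplePointLayer API shown in the linked articles (the relationship type AT_LOCATION and the de-duplication map are illustrative, not from the question; method names may differ slightly between spatial versions):
import com.vividsolutions.jts.geom.Coordinate;
import org.neo4j.gis.spatial.SimplePointLayer;
import org.neo4j.gis.spatial.SpatialDatabaseRecord;
import org.neo4j.gis.spatial.SpatialDatabaseService;
import org.neo4j.graphdb.*;

import java.util.HashMap;
import java.util.Map;

public class UserLocationLoader {
    // illustrative relationship type, not part of the original code
    private enum RelTypes implements RelationshipType { AT_LOCATION }

    public static void load(GraphDatabaseService db, Iterable<double[]> userLonLat) {
        SpatialDatabaseService spatial = new SpatialDatabaseService(db);
        Map<String, Node> seenCoordinates = new HashMap<String, Node>(); // de-duplicate identical points

        try (Transaction tx = db.beginTx()) {
            SimplePointLayer layer = spatial.createSimplePointLayer("users_location", "lon", "lat");
            for (double[] lonLat : userLonLat) {
                String key = lonLat[0] + "," + lonLat[1];
                Node pointNode = seenCoordinates.get(key);
                if (pointNode == null) {
                    // only unique coordinates go into the point layer
                    SpatialDatabaseRecord record = layer.add(new Coordinate(lonLat[0], lonLat[1]));
                    pointNode = record.getGeomNode();
                    seenCoordinates.put(key, pointNode);
                }
                // the (non-unique) user node just points at the shared location node
                Node user = db.createNode();
                user.createRelationshipTo(pointNode, RelTypes.AT_LOCATION);
            }
            tx.success();
        }
    }
}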

I hit the same error when I tried to add nodes with the same lon/lat (0,0) to the layer.
Once more than 100 RTREE_CHILD reference nodes have been inserted, this exception appears. It's a bug in the source code:
src/main/java/org/neo4j/gis/spatial/rtree/RTreeIndex.java
Try this forked plugin:
https://github.com/linkedin-inc/spatial

Related

OWLKnowledgeExplorerReasoner - getObjectLabel always ends in error Unreachable situation

I am trying to access information about the completion graph, but every time it ends with the error uk.ac.manchester.cs.jfact.helpers.UnreachableSituationException: Unreachable situation! when I call getObjectLabel(rootNode, false/true). I tried it on every class expression from the ontology but always ended up with the same error message.
Set<OWLClassExpression> types = classSet2classExpSet(hybridSolver.ontology.classesInSignature().collect(toSet()));
for (OWLClassExpression e : types) {
    OWLKnowledgeExplorerReasoner.RootNode rootNode = loader.getReasoner().getRoot(e);
    System.out.println(loader.getReasoner().getObjectLabel(rootNode, false)); // problem: UnreachableSituation !!
    Node<OWLObjectProperty> propertyNode = (Node<OWLObjectProperty>) loader.getReasoner().getObjectNeighbours(rootNode, false);
    for (OWLObjectProperty p : propertyNode.getEntities()) {
        Collection<OWLKnowledgeExplorerReasoner.RootNode> rootNodes = loader.getReasoner().getObjectNeighbours(rootNode, p);
        ...
    }
}
The other method, getObjectNeighbours(rootNode, false), works fine.
Can somebody help? Is there any way to access the completion graph with OWLAPI? Why might it end with this error?
The labels found for the nodes in question are not named class expressions (e.g., they are AND nodes). These cannot be translated back to OWLClass, and there is no current implementation for translating class expressions back.
Tweaking the code to remove the exceptions is doable, but for your example ontology you would always get back empty nodes, which isn't very informative.
I have removed the exception throwing in the latest version 5 branch; however, I doubt this is sufficient for your needs.
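If you just want to skip the offending nodes in the meantime, a hedged workaround (assuming UnreachableSituationException is the unchecked exception from your stack trace) is to catch it and move on:
import java.util.Set;
import org.semanticweb.owlapi.model.OWLClassExpression;
import org.semanticweb.owlapi.reasoner.knowledgeexploration.OWLKnowledgeExplorerReasoner;
import uk.ac.manchester.cs.jfact.helpers.UnreachableSituationException;

static void printLabelsSafely(OWLKnowledgeExplorerReasoner reasoner, Set<OWLClassExpression> types) {
    for (OWLClassExpression e : types) {
        OWLKnowledgeExplorerReasoner.RootNode rootNode = reasoner.getRoot(e);
        try {
            // only nodes labelled with named classes translate back cleanly;
            // AND nodes (unnamed class expressions) throw here
            System.out.println(reasoner.getObjectLabel(rootNode, false));
        } catch (UnreachableSituationException ex) {
            // skip nodes whose label cannot be mapped back to an OWLClass
        }
    }
}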

Jena Fuseki: Adding temporary triples to execution context during SPARQL-query with property functions

I've been following the ARQ guide on Property Functions. The section on Graph Operations concludes with "New Triples or Graphs can therefore be created as part of the Property Function", and I've been hoping to use this as a means to add triples to the current query execution context (not persisted) so that they are accessible for the remainder of the query.
I've been trying the code snippets in that section of the guide:
DatasetGraph datasetGraph = execCxt.getDataset();
Node otherGraphNode = NodeFactory.createURI("http://example.org/otherGraph");
Graph newGraph = new SimpleGraphMaker().createGraph();
Triple triple = ...
newGraph.add(triple);
datasetGraph.addGraph(otherGraphNode, newGraph);
but I'm running into issues, seemingly with the read-lock.
org.apache.jena.dboe.transaction.txn.TransactionException: Can't become a write transaction
at org.apache.jena.dboe.transaction.txn.Transaction.ensureWriteTxn(Transaction.java:251) ~[fuseki-server.jar:4.2.0]
at org.apache.jena.tdb2.store.StorageTDB.ensureWriteTxn(StorageTDB.java:200) ~[fuseki-server.jar:4.2.0]
at org.apache.jena.tdb2.store.StorageTDB.add(StorageTDB.java:81) ~[fuseki-server.jar:4.2.0]
at org.apache.jena.dboe.storage.system.DatasetGraphStorage.add(DatasetGraphStorage.java:181) ~[fuseki-server.jar:4.2.0]
at org.apache.jena.dboe.storage.system.DatasetGraphStorage.lambda$addGraph$1(DatasetGraphStorage.java:194) ~[fuseki-server.jar:4.2.0]
Is there any way to add triples to the execution context during a SPARQL query?
Yes, and SPARQL Anything uses that capability to triplify non-RDF data at query time and makes it available in the execution context's DatasetGraph.
Here is an example of that being done:
if (this.execCxt.getDataset().isEmpty()) {
    // we only need to call getDatasetGraph() if we have an empty one,
    // otherwise we could triplify the same data multiple times
    dg = getDatasetGraph(p, opBGP);
} else {
    dg = this.execCxt.getDataset();
}
This answer might not address your particular need (adding individual triples), but hopefully some of the code in that project can serve as an example of what you are looking for.
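For the narrower case of a handful of temporary triples, one direction (a sketch only, not tested against Fuseki's transaction handling; the URIs are placeholders) is to build them in an in-memory graph instead of adding them to the TDB-backed dataset, which is what triggers the "Can't become a write transaction" error:
import org.apache.jena.graph.Graph;
import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.sparql.core.DatasetGraph;
import org.apache.jena.sparql.core.DatasetGraphFactory;
import org.apache.jena.sparql.graph.GraphFactory;

// inside the property function; execCxt is the ExecutionContext
DatasetGraph tdbBacked = execCxt.getDataset(); // leave this untouched inside the read transaction

// build the temporary triples in a plain in-memory graph instead
Graph scratch = GraphFactory.createDefaultGraph();
Triple triple = Triple.create(
        NodeFactory.createURI("http://example.org/s"), // placeholder triple
        NodeFactory.createURI("http://example.org/p"),
        NodeFactory.createLiteral("o"));
scratch.add(triple);

// an in-memory dataset carrying the scratch graph as a named graph; whether the
// rest of the query sees it depends on how this dataset is wired into the
// execution context, which is what SPARQL Anything does
DatasetGraph scratchDataset = DatasetGraphFactory.create();
scratchDataset.addGraph(NodeFactory.createURI("http://example.org/otherGraph"), scratch);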

Jena read hook not invoked upon duplicate import read

My problem is probably best explained with code.
Consider the snippet below:
// First read
OntModel m1 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m1,uri0);
m1.loadImports();
// Second read (from the same URI)
OntModel m2 = ModelFactory.createOntologyModel();
RDFDataMgr.read(m2,uri0);
m2.loadImports();
where uri0 points to a valid RDF file describing an ontology model with n imports.
and the following custom ReadHook (which has been set in advance):
@Override
public String beforeRead(Model model, String source, OntDocumentManager odm) {
    System.out.println("BEFORE READ CALLED: " + source);
    return source; // keep the source unchanged
}
Global FileManager and OntDocumentManager are used with the following settings:
processImports = true;
caching = true;
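For reference, roughly how those globals are typically applied (MyReadHook stands in for the custom hook above; the question only says it was set in advance):
import org.apache.jena.ontology.OntDocumentManager;

OntDocumentManager odm = OntDocumentManager.getInstance(); // the global instance
odm.setProcessImports(true); // processImports = true
odm.setCacheModels(true);    // caching = true
odm.setReadHook(new MyReadHook()); // MyReadHook: the custom beforeRead hook shown above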
If I run the first snippet, the model is read from uri0 and beforeRead is invoked exactly n times (once for each import).
However, in the second read, beforeRead won't be invoked even once.
How, and what should I reset in order for Jena to invoke beforeRead in the second read as well?
What I have tried so far:
At first I thought it was due to caching being on, but turning it off or clearing it between the first and second read didn't do anything.
I have also tried removing all ignoredImport records from m1. Nothing changed.
I finally got to solve this. The problem is in ModelFactory.createOntologyModel(). Ultimately this translates to ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_RDFS_INF, null).
All ontology models created with the static OntModelSpec.OWL_MEM_RDFS_INF share their ImportsModelMaker and some of its other objects, which results in shared state. Apparently this state prevented the read hook from being invoked again for the same imports.
This can be prevented by creating a custom, independent and non-static OntModelSpec instance and using it when creating an OntModel, for example:
new OntModelSpec( ModelFactory.createMemModelMaker(), new OntDocumentManager(), RDFSRuleReasonerFactory.theInstance(), ProfileRegistry.OWL_LANG );
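Putting that together with the snippet from the question, a hedged sketch (MyReadHook again stands in for your custom hook):
import org.apache.jena.ontology.OntDocumentManager;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.ontology.ProfileRegistry;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.rulesys.RDFSRuleReasonerFactory;
import org.apache.jena.riot.RDFDataMgr;

// a fresh, non-shared document manager per model, with the hook attached
OntDocumentManager odm = new OntDocumentManager();
odm.setReadHook(new MyReadHook());

OntModelSpec spec = new OntModelSpec(
        ModelFactory.createMemModelMaker(),
        odm,
        RDFSRuleReasonerFactory.theInstance(),
        ProfileRegistry.OWL_LANG);

OntModel m2 = ModelFactory.createOntologyModel(spec);
RDFDataMgr.read(m2, uri0);
m2.loadImports(); // beforeRead is invoked again for each import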

Not able to access iterator() from RestTraverser, it gives exception java.lang.IllegalAccessError

I am implementing the traversal framework using the neo4j java-rest-binding project.
The code is as follows:
RestAPI db = new RestAPIFacade("http://localhost:7474/db/data");
RestNode n21 = db.getNodeById(21);
Map<String, Object> traversalDesc = new HashMap<String, Object>();
traversalDesc.put("order", "breadth_first");
traversalDesc.put("uniqueness", "node_global");
traversalDesc.put("uniqueness", "relationship_global");
traversalDesc.put("returnType", "fullpath");
traversalDesc.put("max_depth", 2);
RestTraverser traverser = db.traverse(n21, traversalDesc);

Iterable<Node> nodes = traverser.nodes();
System.out.println("All Nodes:"); // First Task
for (Node n : nodes) {
    System.out.println(n.getId());
}

Iterable<Relationship> rels = traverser.relationships();
System.out.println("All Relations:"); // Second Task
for (Relationship r : rels) {
    System.out.println(r.getId());
}

Iterator<Path> paths = traverser.iterator(); // Third Task
while (paths.hasNext()) {
    System.out.println(paths.next());
}
I need to do 3 tasks as commented in the code:
Print all the node IDs related to node no. 21
Print all the relation IDs related to node no. 21
Traverse all the paths related to node no. 21
Tasks 1 & 2 are working fine.
But when I try to do traverser.iterator() in 3rd task it throws an Exception saying:
java.lang.IllegalAccessError: tried to access class org.neo4j.helpers.collection.WrappingResourceIterator from class org.neo4j.rest.graphdb.traversal.RestTraverser
Can anyone please check why this is happening, or, if I am doing it wrong, tell me the right way to do it?
Thanks in Advance.
I don't believe using the Neo4j Traversal Framework via the REST DB binding is properly supported, nor is it advisable. If you traverse via REST, each node and each relationship will be retrieved over the network as the traversal proceeds, incurring a tremendous overhead for the traversal.
Edit: The above is not true, the REST traverser is smarter than I thought.
In general, it will be faster to use Cypher, and access the Neo4j Server using JDBC. Read more about JDBC here: https://github.com/neo4j-contrib/neo4j-jdbc
If you really want to use the Traversal Framework, you should use Server Extensions, which allow you to design a traversal to run on the server itself, and then only move the result of the traversal over the network. Read more about server extensions here: http://docs.neo4j.org/chunked/stable/server-unmanaged-extensions.html
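For illustration, a hedged sketch of an unmanaged extension that runs the traversal server-side and only sends the result back (assumes Neo4j 2.x; the path, class name and depth are made up):
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.traversal.Evaluators;
import org.neo4j.graphdb.traversal.Uniqueness;

// mounted via org.neo4j.server.thirdparty_jaxrs_classes in neo4j-server.properties
@Path("/traverse")
public class TraversalExtension {
    private final GraphDatabaseService db;

    public TraversalExtension(@Context GraphDatabaseService db) {
        this.db = db;
    }

    @GET
    @Path("/{nodeId}")
    public Response paths(@PathParam("nodeId") long nodeId) {
        StringBuilder result = new StringBuilder();
        try (Transaction tx = db.beginTx()) {
            Node start = db.getNodeById(nodeId);
            // breadth-first, node-global uniqueness, depth 2 - roughly the map from the question
            for (org.neo4j.graphdb.Path p : db.traversalDescription()
                    .breadthFirst()
                    .uniqueness(Uniqueness.NODE_GLOBAL)
                    .evaluator(Evaluators.toDepth(2))
                    .traverse(start)) {
                result.append(p).append("\n"); // the full path, similar to returnType=fullpath
            }
            tx.success();
        }
        return Response.ok(result.toString()).build();
    }
}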

neo4j reference node obsolete yet still returned from getAllNodes

According to Neo4j documentation the "reference node concept is obsolete - indexes are the canonical way of getting hold of entry points in the graph.".
However if I use GlobalGraphOperations.getAllNodes() I'm still returned a node with id 0 which I didn't create and which has all the looks of a reference node.
I'm trying to implement a method getNode(String uuid)
public Node getNode(String uuid)
{
    GlobalGraphOperations globalGraphOperations = GlobalGraphOperations.at(graphDb);
    for (Node tmpNode : globalGraphOperations.getAllNodes())
    {
        if (tmpNode.equals(graphDb.getReferenceNode()))
        {
            continue;
        }
        String tmpNodeUuid = (String) tmpNode.getProperty("uuid");
        if (tmpNodeUuid.equals(uuid))
        {
            return tmpNode;
        }
    }
    return null;
}
Why does getAllNodes return the reference node?
How can I implement getNode() programmatically without using the deprecated getReferenceNode()?
The reference node concept is indeed deprecated and will be removed in Neo4j 2.0. In 1.x the concept still exists, and the reference node is created when the database is initially created. If you don't need it, you can simply delete it. The method you're writing will get slow as the graph grows, because it traverses the entire graph. You should instead create an index for the uuid property and use that to look up nodes, which is much faster - as well as being the 'canonical way of getting hold of entry points in the graph' :-)
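For example, with the legacy index API (the index name "uuids" is arbitrary, and on 2.0 the read would also need a transaction):
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.index.Index;

// legacy index API, works on 1.x
Index<Node> uuidIndex = graphDb.index().forNodes("uuids");

// when creating a node, also add it to the index
Transaction tx = graphDb.beginTx();
try {
    Node node = graphDb.createNode();
    node.setProperty("uuid", uuid);
    uuidIndex.add(node, "uuid", uuid);
    tx.success();
} finally {
    tx.finish();
}

// look up by uuid instead of scanning all nodes
Node found = uuidIndex.get("uuid", uuid).getSingle(); // null if there is no match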
