I am trying to use the export-graphml function in Neo4j 2.2. I have downloaded neo4j-shell-tools and extracted it into the lib directory. I am able to export the entire database as a GraphML file. However, if I try to export a subset using a query, I receive the following error:
Error occurred in server thread; nested exception is:
java.lang.NoSuchMethodError: org.neo4j.cypher.export.CypherResultSubGraph.from(Lorg/neo4j/cypher/javacompat/ExecutionResult;Lorg/neo4j/graphdb/GraphDatabaseService;Z)Lorg/neo4j/cypher/export/SubGraph;
The statement I used is:
export-graphml -o /path/to/file/out.graphml match (n:Person)-[r:RELATIONSHIP]-() WHERE n.id = 12345 return n, r
I have tried different variations with the different options (-r, -t), and none of them work.
I know there are hundreds of questions out there (and I have been looking prior to asking this!), but whatever I try, Neo4j will not run the APOC export function.
CALL apoc.export.graphml.all('/tmp/complete-graph.graphml', {useTypes:true, storeNodeIds:false})
Failed to invoke procedure apoc.export.graphml.all: Caused by: java.lang.RuntimeException: Export to files not enabled, please set apoc.export.file.enabled=true in your neo4j.conf
Here is the bottom of my neo4j.conf file:
dbms.security.procedures.unrestricted=apoc.*
dbms.directories.plugins=/var/lib/neo4j/plugins
apoc.export.file.enabled=true
apoc.import.file.enabled=true
Here are the contents of /var/lib/neo4j/plugins:
-rwxr-xr-x 1 neo4j adm 15949360 Apr 16 2020 apoc-3.5.0.11-all.jar
I am running v 3.5 on Ubuntu 18.
So it turns out the plugins were duplicated, and I also needed the whitelist setting:
dbms.security.procedures.unrestricted=apoc.*
dbms.security.procedures.whitelist=*
apoc.import.file.enabled=true
apoc.export.file.enabled=true
Along with this, I set dbms.directories.import=/, which then allowed me to export, after creating the graph.xml file and setting permissions on it so that Neo4j could write to it.
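For reference, a minimal sanity check after restarting Neo4j might look like this (a sketch; file, nodes and relationships are a subset of the columns the procedure yields):
CALL apoc.export.graphml.all('complete-graph.graphml', {useTypes:true})
YIELD file, nodes, relationships
RETURN file, nodes, relationships;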
If in doubt - pray about it! God always answers.
This still didn't fix it.
Using Neo4j with DBMS 5.2
//EXPORT to CYPHER
CALL apoc.export.cypher.all("all-plain.cypher", {
    format: "plain",
    useOptimizations: {type: "UNWIND_BATCH", unwindBatchSize: 20}
})
YIELD file, batches, source, format, nodes, relationships, properties, time, rows, batchSize
RETURN file
This gives the same error, and the neo4j.conf file for this DBMS has the same settings added.
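For what it's worth, newer APOC releases (including those shipped for Neo4j 5.x) no longer read apoc.* settings from neo4j.conf; they expect them in a separate apoc.conf in the same conf directory. A minimal sketch, assuming the default layout:
# conf/apoc.conf
apoc.export.file.enabled=true
apoc.import.file.enabled=true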
Our Perfino server crashed recently and has been logging the ERROR shown below ever since. (There are some clues hinting at an OutOfMemory error resulting in a corrupt database.)
The log suggests: 'Possible solution: use the recovery tool'. But neither the official Perfino documentation nor the logs offer more instructions on how to proceed.
So here is the question: how do I use the recovery tool?
Stacktrace:
ERROR [collector] server: could not load transaction data
org.h2.jdbc.JdbcSQLException: File corrupted while reading record: "[495834] stream data key:64898 pos:11 remaining:0". Possible solution: use the recovery tool; SQL statement:
SELECT value FROM transaction_names WHERE id=? [90030-176]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:344)
at org.h2.message.DbException.get(DbException.java:178)
at org.h2.message.DbException.get(DbException.java:154)
at org.h2.index.PageDataIndex.getPage(PageDataIndex.java:242)
at org.h2.index.PageDataNode.getNextPage(PageDataNode.java:233)
at org.h2.index.PageDataLeaf.getNextPage(PageDataLeaf.java:400)
at org.h2.index.PageDataCursor.nextRow(PageDataCursor.java:95)
at org.h2.index.PageDataCursor.next(PageDataCursor.java:53)
at org.h2.index.IndexCursor.next(IndexCursor.java:278)
at org.h2.table.TableFilter.next(TableFilter.java:361)
at org.h2.command.dml.Select.queryFlat(Select.java:533)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:646)
at org.h2.command.dml.Query.query(Query.java:323)
at org.h2.command.dml.Query.query(Query.java:291)
at org.h2.command.dml.Query.query(Query.java:37)
at org.h2.command.CommandContainer.query(CommandContainer.java:91)
at org.h2.command.Command.executeQuery(Command.java:197)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:109)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353)
at com.perfino.a.f.b.a.a(ejt:70)
at com.perfino.a.f.o.a(ejt:880)
at com.perfino.a.f.o.a(ejt:928)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.aa.a(ejt:783)
at com.perfino.a.f.o.a(ejt:847)
at com.perfino.a.f.o.a(ejt:792)
at com.perfino.a.f.o.a(ejt:787)
at com.perfino.a.f.o.a(ejt:60)
at com.perfino.a.f.ac.a(ejt:1011)
at com.perfino.b.a.b(ejt:68)
at com.perfino.b.a.c(ejt:82)
at com.perfino.a.f.o.a(ejt:1006)
at com.perfino.a.i.b.d.a(ejt:168)
at com.perfino.a.i.b.d.b(ejt:155)
at com.perfino.a.i.b.d.b(ejt:52)
at com.perfino.a.i.b.d.a(ejt:45)
at com.perfino.a.i.a.b.a(ejt:94)
at com.perfino.a.c.a.b(ejt:105)
at com.perfino.a.c.a.a(ejt:37)
at com.perfino.a.c.c.run(ejt:57)
at java.lang.Thread.run(Thread.java:745)
Notice: I couldn't recover my database with the procedure described below. I'm still keeping this post as a reference, as the probability of a successful recovery will depend on how broken the database is, and there is no evidence that this procedure is invalid.
By default, Perfino uses the H2 Database Engine as its persistence storage. H2 has a recovery tool and a RunScript tool for importing SQL statements:
# 1. Create a dump of the current database using the tool [1]
# This tool creates a 'config.h2.sql' and a 'perfino.h2.sql' db dump
cd ${PERFINO_DATA_DIR}
java -cp ${PATH_TO_H2_LIB}/h2*.jar org.h2.tools.Recover
# 2. Rename the corrupt database file to e.g. *bkp
mv perfino.h2.db perfino.h2.db.bkp
# 3. Import the dump from step 1, ignoring errors
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
org.h2.tools.RunScript \
-url jdbc:h2:${PERFINO_DATA_DIR}/db/perfino \
-script perfino.h2.sql -checkResults
[1]: Perfino includes a version of the h2.jar under ${PERFINO_INSTALL_DIR}/lib/common/h2.jar. You could of course download the official jar and try with it, but in my case, I could only restore the database with the jar supplied with Perfino.
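Using the bundled jar, step 1 would then look like this (paths per the note above):
cd ${PERFINO_DATA_DIR}
java -cp ${PERFINO_INSTALL_DIR}/lib/common/h2.jar org.h2.tools.Recover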
This failed for me with:
Exception in thread "main" org.h2.jdbc.JdbcSQLException: Feature not supported: "Restore page store recovery SQL script can only be restored to a PageStore file".
If this happens to you, try:
# 1. Delete database and mv files
cd ${PERFINO_DATA_DIR}
rm perfino.h2.db perfino.mv.db
# 2. Create a PageStore database manually
touch perfino.h2.db
# 3. Import the dump with MV_STORE=FALSE on the URL [2]
# (the URL must be quoted so the shell does not treat ';' as a command separator)
java -cp ${PATH_TO_H2_LIB}/h2*.jar \
org.h2.tools.RunScript \
-url "jdbc:h2:${PERFINO_DATA_DIR}/db/perfino;MV_STORE=FALSE" \
-script perfino.h2.sql \
-checkResults \
-continueOnError
[2]: This forces H2 to recreate a PageStore database instead of using the newer MVStore engine (see this thread in Metabase).
I found this article while trying to repair an internal Confluence H2 database, and it worked for me. Here's a shell script as a gist on my GitHub with what I did; you'll have to adjust it for your environment.
Tried with Neo4j version 2.1.7 / 2.2.0
The CQL file contains:
FOREACH (name in ["Hindu","Muslim","Christian","Jain"] | CREATE (:Religion {title:name}) );
I'm unable to import this using neo4j-shell. The error thrown is: Unknown command 'foreach'.
I'm generating the CQL file through PHP.
It seems neo4j-shell is not aware that a valid Cypher statement can start with FOREACH. A simple workaround is to begin the statement with a WITH:
with ["Hindu","Muslim","Christian","Jain"] as r
foreach (name in r|create (:Religion{title:name}));
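Since the CQL file is generated from PHP, the same workaround applies when running the whole file through the shell, e.g. (import.cql is a hypothetical name for the generated file):
neo4j-shell -file import.cql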
I've been using the Neo4j batch loader for a while now, and tonight I started running into issues building my graph from a fresh database export. Running it yields the following:
> java -server -Xmx4G -jar ~/Dev/github.com/jexp/batch-import/target/batch-import-jar-with-dependencies.jar ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Usage: Importer data/dir nodes.csv relationships.csv [node_index node-index-name fulltext|exact nodes_index.csv rel_index rel-index-name fulltext|exact rels_index.csv ....]
Using: Importer ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Using Existing Configuration File
........................
Importing 2412268 Nodes took 4 seconds
.....................
Total import time: 9 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=2412269
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:917)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:471)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:136)
at org.neo4j.batchimport.Importer.doImport(Importer.java:214)
at org.neo4j.batchimport.Importer.main(Importer.java:78)
I was able to successfully run the batch loader for the nodes.csv and rels.csv that are included in its own repository, so I'm thinking that the issue is somewhere in my rels.csv file. However, it's a pretty big file and I would like to know what id=2412269 means, as it seems like the best starting point for diagnosing the failure.
Any ideas?
This means that in the rels.csv file you are trying to create a relationship for a node referenced by id=2412269, but no such node was created from your nodes.csv file.
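A tiny hypothetical example to illustrate (the legacy batch importer appears to assign node ids sequentially by row order, starting at 1, which is consistent with 2412268 imported nodes and a failing id of 2412269):
nodes.csv (tab-separated):
name
Alice
Bob
rels.csv:
start	end	type
1	2	KNOWS
2	3	KNOWS    <- fails with NotFoundException id=3, because only nodes 1 and 2 exist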
After working through the issue with the author of the importer, it turned out that I had single, unescaped quotes in my nodes.csv file, so the rels.csv record was pointing to a node that could not be created from nodes.csv. Unfortunately, the error reported on the console was not exactly the error causing the issue.
I'm trying to export a Neo4j graph (with 4318 nodes & 8145 relationships) for testing, using gremlin-groovy-2.3.0:
g = new Neo4jGraph('/tmp/mygraph');
g.saveGraphML('mygraph.xml');
but I'm getting a list of errors when typing the first command in the console (gremlin-groovy-2.3.0/bin/gremlin.bat). The last line of the error:
Error: Component: 'org.neo4j.kernel.StoreLockerLifecycleAdapter#5e1a7112 was successfully initialized, but failed to start
I have copied the Neo4j database (\data\graph.db) to Gremlin (gremlin-groovy-2.3.0/bin/tmp/mygraph).
Sometimes it runs without error, but then it shows another error. How can I export the graph?
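For what it's worth, org.neo4j.kernel.StoreLockerLifecycleAdapter typically fails to start when another process (for example, a still-running Neo4j server) holds the lock on the store directory, or when the store files were copied while the server was running. A minimal sketch of the export, assuming the server is stopped and the copied graph.db is consistent:
// Gremlin 2.3 console; Neo4jGraph opens the store exclusively
g = new Neo4jGraph('/tmp/mygraph')
g.saveGraphML('mygraph.xml')   // writes mygraph.xml to the console's working directory
g.shutdown()                   // releases the store lock when done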