Importing a DB file in Neo4J - neo4j

I am trying to install the Reactome DB into Neo4j so I can make a graph from it. I keep getting the same error message regardless of my syntax. The DB folder is stored in neo4j\bin\reactome.
I have basically been using this Cypher command and all sorts of permutations of it:
neo4j-admin restore --from=neo4j\bin\reactome --database= reactome.graphdb –force=true"
and, regardless of how I do it, I get this error leading me to think it is something more than the syntax:
Neo.ClientError.Statement.SyntaxError: Invalid input 'e': expected <init> (line 1, column 1 (offset: 0))
"eo4j-admin restore --from=neo4j\bin\reactome --database= reactome.graphdb –force=true""
^

The neo4j-admin tool must be run from the command line -- it is NOT a Cypher operation.
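For example, run it in a Windows command prompt from the Neo4j installation directory, roughly like this (a sketch only; note the plain double hyphens in --force and no space after --database=, and whether restore can actually read the Reactome folder depends on how that dump was produced and on your Neo4j edition):
bin\neo4j-admin restore --from=bin\reactome --database=reactome.graphdb --force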

To install the Reactome DB with Neo4j, click on 'Open folder/Import' in Neo4j. Then go up one directory and into '/data/databases/graph.db/'. Copy all the contents of the Reactome graph DB folder (uncompressed) into that directory.

Related

Neo4j: something went wrong "RangeError. invalid string length" and the application cannot recover

I use Neo4j Desktop on Linux (Ubuntu 20.04). When I loaded a huge CSV file (44 megabytes), this error appeared:
something went wrong "RangeError. invalid string length" and the application cannot recover, as seen in this picture:
The code used:
LOAD CSV WITH HEADERS FROM 'file:///Email-EuAll.csv' as line
WITH toInteger(line.source) AS Source, toInteger(line.destination) AS Destination
MERGE (a:person {name:Source})
MERGE (b:person {name:Destination})
MERGE (a)-[:Freind ]-(b)
RETURN *
Can you share your statement?
It could also be that you tried to return too much data to the Neo4j Browser. Just return nothing, or count(*).
You can also try to run your statement in cypher-shell to see if it works there and narrow down the issue.
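For example, the same load can be run so that it does not stream every created node and relationship back to the Browser; this is a sketch of the statement from the question with only the final RETURN changed:
// same import as above, returning only a count so the Browser
// does not have to render millions of rows
// (relationship type spelled as in the original statement)
LOAD CSV WITH HEADERS FROM 'file:///Email-EuAll.csv' AS line
WITH toInteger(line.source) AS Source, toInteger(line.destination) AS Destination
MERGE (a:person {name: Source})
MERGE (b:person {name: Destination})
MERGE (a)-[:Freind]-(b)
RETURN count(*)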

Neo4j client error: apoc.load.csv failed procedure, caused by ArrayIndexOutOfBoundsException: 1

I am a newbie to Neo4j. I am using Neo4j version 3.5.6 Community Edition and APOC plugin version 3.5.0.4. I have a CSV file, NR_Nodes_Agent_I_20190331_tmp.csv, in the default import folder. For testing purposes I have written a Cypher query:
CALL apoc.load.csv('NR_Nodes_Agent_I_20190331_tmp.csv') yield map as row return row;
but I am getting the error below:
Neo.ClientError.Procedure.ProcedureCallFailed: Failed to invoke procedure apoc.load.csv: Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
I did some research on it but haven't been able to solve it.
I uncommented these statements in the neo4j.conf file:
dbms.directories.import=import
dbms.security.allow_csv_import_from_file_urls=true
dbms.security.procedures.whitelist=apoc.coll.*,apoc.load.*,apoc.*
Make sure you have this setting in your neo4j.conf file:
apoc.import.file.enabled=true
Make sure your CSV file is well-formed.
For example, this CSV file would cause the same ArrayIndexOutOfBoundsException: 1 error message (notice that the single data row is missing a second value, since it has one fewer comma than the header):
a,b
1
On the other hand, this CSV file would work, even though the data row has no value after the comma:
a,b
1,
The query result would be:
╒════════════════╕
│"row"           │
╞════════════════╡
│{"a":"1","b":""}│
└────────────────┘
And if the data row had a second value, like this:
a,b
1,2
Then the query result would be:
╒═════════════════╕
│"row"            │
╞═════════════════╡
│{"a":"1","b":"2"}│
└─────────────────┘
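If it is not obvious which line of the file is short, plain LOAD CSV (without WITH HEADERS) can help locate it, because it returns every row as a list of whatever length that row actually has. A sketch, assuming the file is in the default import directory:
// inspect raw rows and their column counts to spot lines
// that have fewer fields than the header
LOAD CSV FROM 'file:///NR_Nodes_Agent_I_20190331_tmp.csv' AS line
RETURN size(line) AS columns, line
LIMIT 20;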

Drop index on a nested property (with a dot) in Neo4j

I'm using Neo4j with Bolt and the Neo4j driver in Java. When I tried to run
the following command:
DROP INDEX ON :SingleBoardComputer(id.id)
Note that the name of the property is actually "id.id" (basically with a dot).
I have the following error:
Neo.ClientError.Statement.SyntaxError: Invalid input '\': expected whitespace or a list of property key names (line 1, column 36 (offset: 35))
"DROP INDEX ON :SingleBoardComputer(id.id)"
Is there any way to drop an index using the driver?
I'm using Neo4j 3.3.5 and the neo4j driver 1.6.1
I'm surprised because I can create the index without problems.
Thanks
The solution is to escape the property name with backticks:
DROP INDEX ON :SingleBoardComputer(`id.id`)
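For example, with the Neo4j 3.x index syntax the backticks work for both creating and dropping the index, and the same strings can be passed to session.run(...) through the Java driver (a sketch):
// backticks quote a property key that contains a dot
CREATE INDEX ON :SingleBoardComputer(`id.id`);
// dropping uses the same quoting
DROP INDEX ON :SingleBoardComputer(`id.id`);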

Out of memory when creating large number of relationships

I'm new to Neo4J, and I want to try it on some data I've exported from MySQL. I've got the community edition running with neo4j console, and I'm entering commands using the neo4j-shell command line client.
I have 2 CSV files, that I use to create 2 types of node, as follows:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:/tmp/updates.csv" AS row
CREATE (:Update {update_id: row.id, update_type: row.update_type, customer_name: row.customer_name, .... });
CREATE INDEX ON :Update(update_id);
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:/tmp/facts.csv" AS row
CREATE (:Fact {update_id: row.update_id, status: row.status, ..... });
CREATE INDEX ON :Fact(update_id);
This gives me approx 650,000 Update nodes, and 21,000,000 Fact nodes.
Once the indexes are online, I try to create relationships between the nodes, as follows:
MATCH (a:Update)
WITH a
MATCH (b:Fact{update_id:a.update_id})
CREATE (b)-[:FROM]->(a)
This fails with an OutOfMemoryError. I believe this is because Neo4J does not commit the transaction until it completes, keeping it in memory.
What can I do to prevent this? I have read about USING PERIODIC COMMIT but it appears this is only useful when reading the CSV, as it doesn't work in my case:
neo4j-sh (?)$ USING PERIODIC COMMIT
> MATCH (a:Update)
> WITH a
> MATCH (b:Fact{update_id:a.update_id})
> CREATE (b)-[:FROM]->(a);
QueryExecutionKernelException: Invalid input 'M': expected whitespace, comment, an integer or LoadCSVQuery (line 2, column 1 (offset: 22))
"MATCH (a:Update)"
^
Is it possible to create relationships in this way, between large numbers of existing nodes, or do I need to take a different approach?
The out-of-memory error is expected: Neo4j tries to commit everything in one transaction, and since you didn't mention otherwise, I assume the Java heap settings are at the default (512m).
You can, however, batch the process with a kind of pagination; in this case I would also prefer MERGE over CREATE:
MATCH (a:Update)
WITH a
SKIP 0
LIMIT 50000
MATCH (b:Fact{update_id:a.update_id})
MERGE (b)-[:FROM]->(a)
Modify SKIP and LIMIT after each batch until you reach all 650k Update nodes.
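For example, the second batch would look like this (a sketch; only the SKIP value changes from one batch to the next):
// second batch: skip the 50,000 Update nodes already processed
MATCH (a:Update)
WITH a
SKIP 50000
LIMIT 50000
MATCH (b:Fact {update_id: a.update_id})
MERGE (b)-[:FROM]->(a);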

Cypher query in Neoclipse cannot find nodes

I installed "neo4j-community-1.9-windows" and "neoclipse-1.9.1-win32.win32.x86_64" on one Win2008 R2 server. The Neo4j graph database files are located at "E:\neo4j_home" directory
E:\neo4j_home\bin
E:\neo4j_home\config
E:\neo4j_home\data
......
E:\neo4j_home\system
The graph database is running fine. I can see the nodes and the relationships at http://localhost:7474/; for example, I can see node 100 at http://localhost:7474/webadmin/#/data/search/100/
In Neoclipse, I set the connection URI to E:\neo4j_home\data or E:/neo4j_home/data and click the "start/connect database" menu. The connection shows green. But when I run a Cypher query in Neoclipse, it cannot find any nodes except node(0). For example, start n=node(100) return n; gives the error: org.neo4j.cypher.EntityNotFoundException: Node 100 not found
Did I set the wrong connection URI?
AFAIK the data URL should be E:\neo4j_home\data\graph.db instead of E:\neo4j_home\data.
