How to improve performance of LOAD CSV in Neo4j

I am using the Community Edition of Neo4j. I am trying to create 50,000 nodes and 93,400 relationships from a CSV file, but the LOAD CSV command is taking around 40 minutes to create the nodes and relationships.
I am using the py2neo package in Python to connect and run Cypher queries. The LOAD CSV command looks similar to the one below:
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM "file:///Sample.csv" AS row WITH row
MERGE(animal:Animal { name:row.`ANIMAL_NAME`})
ON CREATE SET animal = {name:row.`ANIMAL_NAME`, type:row.`TYPE`, status:row.`Status`, birth_date:row.`DATE`}
ON MATCH SET animal +={name:row.`ANIMAL_NAME`,type:row.`TYPE`,status:row.`Status`,birth_date:row.`DATE`}
MERGE (person:Person { name:row.`PERSON_NAME`})
ON CREATE SET person = {name:row.`PERSON_NAME`, age:row.`AGE`, address:row.`Address`, birth_date:row.`PERSON_DATE`}
ON MATCH SET person += { name:row.`PERSON_NAME`, age:row.`AGE`, address:row.`Address`, birth_date:row.`PERSON_DATE`}
MERGE (person)-[:OWNS]->(animal);
Infrastructure Details:
dbms.memory.heap.max_size=16384M
dbms.memory.heap.initial_size=2048M
dbms.memory.pagecache.size=512M
neo4j_version:3.3.9
How can I get it to run faster? Thanks in advance.

Ideally, you should be using the latest Neo4j version, as there have been many performance improvements since 3.3.9. Since you already have indexes on :Animal(name) and :Person(name), the other main issue is probably that the Cypher planner is generating an expensive Eager operation (at least in Neo4j 4.0.3) for your query. Whenever you have performance issues, you should use EXPLAIN or PROFILE to see the operations that the Cypher planner generates.
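For example, you can prefix the query (or a cut-down version of it) with EXPLAIN in the Browser or cypher-shell to see the plan without running it; the snippet below is just a sketch reusing the file and column names from the question:
EXPLAIN
LOAD CSV WITH HEADERS FROM "file:///Sample.csv" AS row
// EXPLAIN only shows the plan, it does not execute the MERGEs
MERGE (animal:Animal {name: row.`ANIMAL_NAME`})
MERGE (person:Person {name: row.`PERSON_NAME`})
MERGE (person)-[:OWNS]->(animal);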
Try using this simpler query (which should do the same thing as yours). Using EXPLAIN in neo4j 4.0.3, this query does not use the Eager operation:
:auto USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM "file:///Test.csv" AS row
MERGE(animal:Animal {name: row.`ANIMAL_NAME`})
SET animal += {type:row.`TYPE`, status:row.`Status`, birth_date:row.`DATE`}
MERGE (person:Person { name:row.`PERSON_NAME`})
SET person += {age:row.`AGE`, address:row.`Address`, birth_date:row.`PERSON_DATE`}
MERGE (person)-[:OWNS]->(animal);
The :auto command is required in neo4j 4.x when using USING PERIODIC COMMIT.
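For reference, if those indexes were not already in place, they could be created up front (3.x syntax shown; in 4.x the equivalent is CREATE INDEX FOR (n:Animal) ON (n.name)):
CREATE INDEX ON :Animal(name);
CREATE INDEX ON :Person(name);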

Related

Can I load in nodes and relationships from a csv file using 1 cypher command?

I have 2 CSV files which I am trying to load into a Neo4j database using Cypher: drivers.csv, which holds every Formula 1 driver, and lap_times.csv, which stores every lap ever raced in F1.
I have managed to load in all of the nodes, although the lap times file is very large so it took quite a long time! I then tried to add the relationships afterwards, but there are so many that need to be added that I gave up waiting (it was taking multiple days and still had not finished).
I'm pretty sure there is a way to load in the nodes and relationships at the same time, which would allow me to use periodic commit for the relationships, which I cannot do right now. Essentially I just need to combine the 2 commands into one, but after some attempts I can't seem to work out how to do it.
// load in the lap_times.csv, changing the variable names - about half million nodes (takes 3-4 days)
USING PERIODIC COMMIT 25000
LOAD CSV WITH HEADERS from 'file:///lap_times.csv'
AS row
MERGE (lt: lapTimes {raceId: row.raceId, driverId: row.driverId, lap: row.lap, position: row.position, time: row.time, milliseconds: row.milliseconds})
RETURN lt;
// add a relationship between laptimes, drivers and races - takes 3-4 days
MATCH (lt:lapTimes),(d:Driver),(r:race)
WHERE lt.raceId = r.raceId AND lt.driverId = d.driverId
MERGE (d)-[rel8:LAPPING_AT]->(lt)
MERGE (r)-[rel9:TIMED_LAP]->(lt)
RETURN type(rel8), type(rel9)
Thanks in advance for any help!
You should review the documentation for indexes here:
https://neo4j.com/docs/cypher-manual/current/administration/indexes-for-search-performance/
Basically, indexes, once created, allow quick lookups of nodes of a given label by the given property or properties. If you DON'T have an index and you MATCH or MERGE a node, then for every row it has to do a label scan of all nodes with that label and check their properties to find the match, which becomes very expensive, especially when loading CSVs, because those operations happen for every row in the CSV.
For your :lapTimes nodes (though we would recommend singular labels in most cases), if there are none of them in your graph to start with, then a CREATE instead of a MERGE is fine. You may want a composite index on :lapTimes(raceId, driverId, lap), since that should uniquely identify the node if you need to look it up later. Using CREATE instead of MERGE here should process much, much faster.
Your second query should be MATCHing on :lapTimes nodes (label scan), and from each doing an index lookup on the :race and :driver nodes, so indexes are key here for performance.
You need indexes on: :race(raceId) and :Driver(driverId).
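For example (3.x syntax, matching the label casing used in your current queries; the composite index is only needed if you look :lapTimes nodes up by these properties later):
CREATE INDEX ON :race(raceId);
CREATE INDEX ON :Driver(driverId);
CREATE INDEX ON :lapTimes(raceId, driverId, lap);
With those in place, the relationship query can look like this: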
MATCH (lt:lapTimes)
WITH lt, lt.raceId as raceId, lt.driverId as driverId
MATCH (d:Driver), (r:race)
WHERE r.raceId = raceId AND d.driverId = driverId
MERGE (d)-[:LAPPING_AT]->(lt)
MERGE (r)-[:TIMED_LAP]->(lt)
You might consider CREATE instead of MERGE for the relationships, if you know there are no duplicate entries.
I removed your RETURN because returning the types isn't useful information.
Also, use consistent casing for your node labels, and make sure the labels you use when creating indexes match the labels in your graph exactly.
Also, you would probably want to batch these changes instead of trying to process them all at once.
If you install APOC Procedures you can make use of apoc.periodic.iterate() to batch the changes, which will be faster and easier on your heap. You will still need the indexes first.
CALL apoc.periodic.iterate("
MATCH (lt:lapTimes)
WITH lt, lt.raceId as raceId, lt.driverId as driverId
MATCH (d:Driver), (r:race)
WHERE r.raceId = raceId AND d.driverId = driverId
RETURN lt, d, r",
"MERGE (d)-[:LAPPING_AT]->(lt)
MERGE (r)-[:TIMED_LAP]->(lt)", {}) YIELD batches, total, errorMessages
RETURN batches, total, errorMessages
Single CSV load
If you want to handle everything all at once in a single CSV load, you can do that, but again you will need indexes first. Here's what you'll need at a minimum:
CREATE INDEX ON :Driver(driverId);
CREATE INDEX ON :Race(raceId);
After those are created, you can use this, assuming you are starting from scratch (I fixed the case of your labels and made them singular):
USING PERIODIC COMMIT 25000
LOAD CSV WITH HEADERS from 'file:///lap_times.csv' AS row
MERGE (d:Driver {driverId:row.driverId})
MERGE (r:Race {raceId:row.raceId})
CREATE (lt:LapTime {raceId: row.raceId, driverId: row.driverId, lap: row.lap, position: row.position, time: row.time, milliseconds: row.milliseconds})
CREATE (d)-[:LAPPING_AT]->(lt)
CREATE (r)-[:TIMED_LAP]->(lt)
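If you later need to look up :LapTime nodes by their identifying properties, the composite index suggested earlier would be (3.x syntax, using the singular label from this version of the query):
CREATE INDEX ON :LapTime(raceId, driverId, lap);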

Cypher Import from CSV to Neo4J - How To Improve Performance

I am importing the following to Neo4J:
categories.csv
CategoryName1
CategoryName2
CategoryName3
...
categories_relations.csv
category_parent category_child
CategoryName3 CategoryName10
CategoryName32 CategoryName41
...
Basically, categories_relations.csv shows parent-child relationships between the categories from categories.csv.
I imported the first csv file with the following query which went well and pretty quickly:
USING PERIODIC COMMIT
LOAD CSV FROM 'file:///categories.csv' as line
CREATE (:Category {name:line[0]})
Then I imported the second csv file with:
USING PERIODIC COMMIT
LOAD CSV FROM 'file:///categories_relations.csv' as line
MATCH (a:Category),(b:Category)
WHERE a.name = line[0] AND b.name = line[1]
CREATE (a)-[r:ISPARENTOF]->(b)
I have about 2 million nodes.
I tried executing the 2nd query and it is taking quite long. Can I make the query execute more quickly?
Confirm you are matching on the right property. You are setting only one property, name, on the Category nodes when creating the categories, and that is the property your second query must match on to create the relationships between categories.
To make the 2nd query execute faster, add an index on the property you are matching Category nodes on (here, name):
CREATE INDEX ON :Category(name)
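With that index in place, a version of your second query that does an index lookup on each side (a sketch reusing the same file and relationship type from the question) would be:
USING PERIODIC COMMIT
LOAD CSV FROM 'file:///categories_relations.csv' as line
MATCH (a:Category {name: line[0]})
MATCH (b:Category {name: line[1]})
CREATE (a)-[:ISPARENTOF]->(b)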
If it still takes a long time, you can refer to my answer on LOAD CSV here.

"Unable to rollback transaction" error in Neo4j

I am trying to run the following query to create my nodes and relationships from a .csv file that I have:
USING PERIODIC COMMIT 1000 LOAD CSV WITH HEADERS FROM 'file:///LoanStats3bEDITED.csv' AS line
//USING PERIODIC COMMIT 1000 makes sure we don't get a memory error
//creating the nodes with their properties
//member node
CREATE (member:Person{member_id:TOINT(line.member_id)})
//Personal information node
CREATE (personalInformation:PersonalInformation{addr_state:line.addr_state})
//recordHistory node
CREATE (recordHistory:RecordHistory{delinq_2yrs:TOFLOAT(line.delinq_2yrs),earliest_cr_line:line.earliest_cr_line,inq_last_6mths:TOFLOAT(line.inq_last_6mths),collections_12_mths_ex_med:TOFLOAT(line.collections_12_mths_ex_med),delinq_amnt:TOFLOAT(line.delinq_amnt),percent_bc_gt_75:TOFLOAT(line.percent_bc_gt_75), pub_rec_bankruptcies:TOFLOAT(line.pub_rec_bankruptcies), tax_liens:TOFLOAT(line.tax_liens)})
//Loan node
CREATE (loan:Loan{funded_amnt:TOFLOAT(line.funded_amnt),term:line.term, int_rate:line.int_rate, installment:TOFLOAT(line.installment),purpose:line.purpose})
//Customer Finances node
CREATE (customerFinances:CustomerFinances{emp_length:line.emp_length,verification_status_joint:line.verification_status_joint,home_ownership:line.home_ownership, annual_inc:TOFLOAT(line.annual_inc), verification_status:line.verification_status,dti:TOFLOAT(line.dti), annual_inc_joint:TOFLOAT(line.annual_inc_joint),dti_joint:TOFLOAT(line.dti_joint)})
//Accounts node
CREATE (accounts:Accounts{revol_util:line.revol_util,tot_cur_bal:TOFLOAT(line.tot_cur_bal)})
//creating the relationships
CREATE UNIQUE (member)-[:FINANCIAL{issue_d:line.issue_d,loan_status:line.loan_status, application_type:line.application_type}]->(loan)
CREATE UNIQUE (customerFinances)<-[:FINANCIAL]-(member)
CREATE UNIQUE (accounts)<-[:FINANCIAL{open_acc:TOINT(line.open_acc),total_acc:TOFLOAT(line.total_acc)}]-(member)
CREATE UNIQUE (personalInformation)<-[:PERSONAL]-(member)
CREATE UNIQUE (recordHistory)<-[:HISTORY]-(member)
However, I keep getting the following error:
Unable to rollback transaction
What does this mean and how can I fix my query so it can be run successfully?
I am now getting the following error:
GC overhead limit exceeded
I think you are out of memory.
Solutions:
Use the neo4j-import batch importer.
Split your queries.
Create constraints to speed up the queries.
Why do you need CREATE UNIQUE? You could just use CREATE if your CSV is clean, or use MERGE.
I think it could also be faster if you execute the query in the neo4j-shell rather than in the browser.
Download more RAM :-)
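For example, a uniqueness constraint (3.x syntax; a sketch assuming :Person nodes are identified by member_id, as in the query above) would look like:
CREATE CONSTRAINT ON (p:Person) ASSERT p.member_id IS UNIQUE;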
If you really need the uniqueness of relationships, replace CREATE UNIQUE with MERGE.
Also, repeatedly writing relationships of the same FINANCIAL type causes Cypher to materialize the whole intermediate result before each of the 3 operations with an Eager operator, so that later operations don't conflict with the writes of earlier ones.
That's why the periodic commit does not take effect, and the whole intermediate result ends up using too much memory.
Something else you could do is to use the APOC library and apoc.periodic.iterate instead for batching.
call apoc.periodic.iterate("
LOAD CSV WITH HEADERS FROM 'file:///LoanStats3bEDITED.csv' AS line RETURN line
","
//member node
CREATE (member:Person{member_id:TOINT(line.member_id)})
//Personal information node
CREATE (personalInformation:PersonalInformation{addr_state:line.addr_state})
//recordHistory node
CREATE (recordHistory:RecordHistory{delinq_2yrs:TOFLOAT(line.delinq_2yrs),earliest_cr_line:line.earliest_cr_line,inq_last_6mths:TOFLOAT(line.inq_last_6mths),collections_12_mths_ex_med:TOFLOAT(line.collections_12_mths_ex_med),delinq_amnt:TOFLOAT(line.delinq_amnt),percent_bc_gt_75:TOFLOAT(line.percent_bc_gt_75), pub_rec_bankruptcies:TOFLOAT(line.pub_rec_bankruptcies), tax_liens:TOFLOAT(line.tax_liens)})
//Loan node
CREATE (loan:Loan{funded_amnt:TOFLOAT(line.funded_amnt),term:line.term, int_rate:line.int_rate, installment:TOFLOAT(line.installment),purpose:line.purpose})
//Customer Finances node
CREATE (customerFinances:CustomerFinances{emp_length:line.emp_length,verification_status_joint:line.verification_status_joint,home_ownership:line.home_ownership, annual_inc:TOFLOAT(line.annual_inc), verification_status:line.verification_status,dti:TOFLOAT(line.dti), annual_inc_joint:TOFLOAT(line.annual_inc_joint),dti_joint:TOFLOAT(line.dti_joint)})
//Accounts node
CREATE (accounts:Accounts{revol_util:line.revol_util,tot_cur_bal:TOFLOAT(line.tot_cur_bal)})
//creating the relationships
MERGE (member)-[:FINANCIAL{issue_d:line.issue_d,loan_status:line.loan_status, application_type:line.application_type}]->(loan)
MERGE (customerFinances)<-[:FINANCIAL]-(member)
MERGE (accounts)<-[:FINANCIAL{open_acc:TOINT(line.open_acc),total_acc:TOFLOAT(line.total_acc)}]-(member)
MERGE (personalInformation)<-[:PERSONAL]-(member)
MERGE (recordHistory)<-[:HISTORY]-(member)
", {batchSize:1000, iterateList:true})

Out of memory when creating large number of relationships

I'm new to Neo4J, and I want to try it on some data I've exported from MySQL. I've got the community edition running with neo4j console, and I'm entering commands using the neo4j-shell command line client.
I have 2 CSV files, that I use to create 2 types of node, as follows:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:/tmp/updates.csv" AS row
CREATE (:Update {update_id: row.id, update_type: row.update_type, customer_name: row.customer_name, .... });
CREATE INDEX ON :Update(update_id);
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:/tmp/facts.csv" AS row
CREATE (:Fact {update_id: row.update_id, status: row.status, ..... });
CREATE INDEX ON :Fact(update_id);
This gives me approx 650,000 Update nodes, and 21,000,000 Fact nodes.
Once the indexes are online, I try to create relationships between the nodes, as follows:
MATCH (a:Update)
WITH a
MATCH (b:Fact{update_id:a.update_id})
CREATE (b)-[:FROM]->(a)
This fails with an OutOfMemoryError. I believe this is because Neo4J does not commit the transaction until it completes, keeping it in memory.
What can I do to prevent this? I have read about USING PERIODIC COMMIT but it appears this is only useful when reading the CSV, as it doesn't work in my case:
neo4j-sh (?)$ USING PERIODIC COMMIT
> MATCH (a:Update)
> WITH a
> MATCH (b:Fact{update_id:a.update_id})
> CREATE (b)-[:FROM]->(a);
QueryExecutionKernelException: Invalid input 'M': expected whitespace, comment, an integer or LoadCSVQuery (line 2, column 1 (offset: 22))
"MATCH (a:Update)"
^
Is it possible to create relationships in this way, between large numbers of existing nodes, or do I need to take a different approach?
The out-of-memory exception is expected, as the query tries to commit everything in a single transaction, and since you didn't provide them, I assume the Java heap settings are at their default (512m).
You can, however, batch the process with a kind of pagination; I would prefer to use MERGE rather than CREATE in this case:
MATCH (a:Update)
WITH a
SKIP 0
LIMIT 50000
MATCH (b:Fact{update_id:a.update_id})
MERGE (b)-[:FROM]->(a)
Modify SKIP and LIMIT after each batch until you reach all 650k Update nodes.
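Alternatively, if the APOC library is installed (as used in an earlier answer here), apoc.periodic.iterate can do the batching for you instead of manual SKIP/LIMIT paging; this is only a sketch assuming the same labels and property names as above:
CALL apoc.periodic.iterate("
MATCH (a:Update) RETURN a
","
MATCH (b:Fact {update_id: a.update_id})
MERGE (b)-[:FROM]->(a)
", {batchSize:10000}) YIELD batches, total, errorMessages
RETURN batches, total, errorMessages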

Load csv merge performance

I have a performance issue with bulk insert into neo4j.
I have a CSV file with 400k rows, which produces about 3.5 million rows, and I use the LOAD CSV command with the latest version of Neo4j.
I've noticed that when I use a CREATE statement, the load takes about 4 minutes, and without any indexes at all, about 3.5 minutes.
My first question is whether this is the normal rate of nodes per minute.
Now, my real problem is that I need to use MERGE, for data integrity reasons, and when I do, it can take as long as 24 hours, even together with indexes.
So 2 additional questions are:
Is LOAD CSV recommended for the best load performance,
and also:
What can I do about this performance issue?
EDIT:
here is the query:
LOAD CSV WITH HEADERS FROM 'file:///import.csv' AS line FIELDTERMINATOR '|'
MERGE (session :Session { session:line.session })
MERGE (hit :Hit { key:line.key,date_time:line.date_time,session:line.session })
MERGE (user :User { id:line.user_id })
MERGE (session2 :Session2 { session2:line.session2 })
MERGE (country :Country{ name:line.country})
MERGE (tv :TV { name:tv.Model })
MERGE (transfer_protocol :Protocol { name:line.transfer_protocol })
MERGE (os :OS { name:line.os_name ,version:line.os_version, row_key:line.os_name+line.os_version})
Sample: session_guid|hit_key_guid|useridguid|session2_guid|PANASONIC|TCP|ANDROID|5.0
The session, user, session2, country, tv, transfer_protocol and os nodes have unique constraints, and hit has an index.
**session1 and session2 can have many hits (1 to 100, average 5)
hit_key_guid is different for each csv line
It's running really slowly on a fairly powerful machine; each 1000 rows can take up to 10 seconds.
I also checked with the profiler, and there is no "Eager" operator.
thanks
Lior
You should share your data model, your indexes, your LOAD CSV query and also the PROFILE output. Are you using PERIODIC COMMIT?
Make sure that you don't run into the Eager issue, see here:
http://neo4j.com/developer/guide-import-csv/#_load_csv_for_medium_sized_datasets
http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
In general, LOAD CSV is fine for a dataset of your size; from around 10M rows I'd probably switch to the import tool.
It turned out that the server-side code hadn't created the indexes properly; once they were created, the load completed with good performance.
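For reference, the constraints and index described in the question would look something like this (a sketch, 3.x syntax, assuming the property names used in the query above):
CREATE CONSTRAINT ON (s:Session) ASSERT s.session IS UNIQUE;
CREATE CONSTRAINT ON (u:User) ASSERT u.id IS UNIQUE;
CREATE CONSTRAINT ON (c:Country) ASSERT c.name IS UNIQUE;
CREATE INDEX ON :Hit(key);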
