I am still trying to resolve my speed issue (shown here: Cypher MATCH query speed).
One thing I noticed is that I am importing the data with a unique constraint in place, as proven by the below:
neo4j-sh (?)$ create index on :Person(username);
QueryExecutionKernelException: Label 'Person' and property 'username'
have a unique constraint defined on them, so an index is already
created that matches this.
When I try to view the indexes in shell, I get the following:
neo4j-sh (?)$ index --indexes
Node indexes:
Relationship indexes:
Are autogenerated indexes not supposed to show up? How can I verify that the unique constraint is in fact indexing the username?
The main problem (as shown in the above link) is that the below simple query is taking 36 seconds (with an eager call) and twice that time when switched to a non-eager call.
USING PERIODIC COMMIT 15000
LOAD CSV WITH HEADERS FROM "file:d:/messages.csv" AS line
MATCH (a:Geotagged { username: line.sender }) - [r:MSGED] -> (b:Geotagged { username: line.recipient })
RETURN NULL;
Note that this excludes the SET call I was originally trying to use; I removed it, and the MATCH alone is taking forever.
Additionally, I have also increased the pagecache to several times what I should need and saw no change.
EDIT 1
The nodes labeled with 'Geotagged' are ALSO labeled as 'Person'. All nodes are 'Person', some just happen to also be 'Geotagged'.
Have you used a uniqueness constraint with the Geotagged label as well as the Person label? I found that a uniqueness constraint on both labels increased speed greatly.
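For example, a sketch of that constraint (assuming the property being matched is username, as in your query):
CREATE CONSTRAINT ON (g:Geotagged) ASSERT g.username IS UNIQUE;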
You are using the index command, which lists legacy indexes; use the schema command to list schema indexes and constraints.
Also if you match by :Geotagged(username) you have to have an index for that combination:
create index on :Geotagged(username);
or match on :Person(username) instead.
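For example, running the schema command in the neo4j shell should list the constraint and its backing index; the output will look roughly like this (exact formatting varies by version):
neo4j-sh (?)$ schema
Indexes
  ON :Person(username) ONLINE (for uniqueness constraint)
Constraints
  ON (person:Person) ASSERT person.username IS UNIQUE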
Related
I have 2 CSV files which I am trying to load into a Neo4j database using Cypher: drivers.csv, which holds every Formula 1 driver, and lap_times.csv, which stores every lap ever raced in F1.
I have managed to load in all of the nodes, although the lap times file is very large, so it took quite a long time! I then tried to add the relationships afterwards, but there are so many that need to be added that I gave up waiting (it was taking multiple days and still had not fully loaded).
I’m pretty sure there is a way to load in the nodes and relationships at the same time, which would allow me to use periodic commit for the relationships, which I cannot do right now. Essentially I just need to combine the two commands into one, but after some attempts I can’t seem to work out how to do it.
// load in the lap_times.csv, changing the variable names - about half a million nodes (takes 3-4 days)
USING PERIODIC COMMIT 25000
LOAD CSV WITH HEADERS from 'file:///lap_times.csv'
AS row
MERGE (lt: lapTimes {raceId: row.raceId, driverId: row.driverId, lap: row.lap, position: row.position, time: row.time, milliseconds: row.milliseconds})
RETURN lt;
// add a relationship between laptimes, drivers and races - takes 3-4 days
MATCH (lt:lapTimes),(d:Driver),(r:race)
WHERE lt.raceId = r.raceId AND lt.driverId = d.driverId
MERGE (d)-[rel8:LAPPING_AT]->(lt)
MERGE (r)-[rel9:TIMED_LAP]->(lt)
RETURN type(rel8), type(rel9)
Thanks in advance for any help!
You should review the documentation for indexes here:
https://neo4j.com/docs/cypher-manual/current/administration/indexes-for-search-performance/
Basically, indexes, once created, allow quick lookups of nodes of a given label, for the given property or properties. If you DON'T have an index and you do a MATCH or MERGE of a node, then for every row it has to do a label scan of all nodes of the given label and check all of their properties to find the matching nodes. That becomes very expensive, especially when loading CSVs, because those operations are likely happening for each row in the CSV.
For your :lapTimes nodes (though we would recommend you use singular labels in most cases), if there are none of them in your graph to start with, then a CREATE instead of a MERGE is fine. You may want a composite index on :lapTimes(raceId, driverId, lap), since that should uniquely identify the node, if you need to look it up later. Using CREATE instead of MERGE here should process much much faster.
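A sketch of that composite index (composite indexes require Neo4j 3.2 or later; property names taken from your MERGE):
CREATE INDEX ON :lapTimes(raceId, driverId, lap);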
Your second query should be MATCHing on :lapTimes nodes (label scan), and from each doing an index lookup on the :race and :driver nodes, so indexes are key here for performance.
You need indexes on: :race(raceId) and :Driver(driverId).
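For example, using the labels and property names exactly as they appear in your queries:
CREATE INDEX ON :race(raceId);
CREATE INDEX ON :Driver(driverId);
With those in place, the relationship query becomes: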
MATCH (lt:lapTimes)
WITH lt, lt.raceId as raceId, lt.driverId as driverId
MATCH (d:Driver), (r:race)
WHERE r.raceId = raceId AND d.driverId = driverId
MERGE (d)-[:LAPPING_AT]->(lt)
MERGE (r)-[:TIMED_LAP]->(lt)
You might consider CREATE instead of MERGE for the relationships, if you know there are no duplicate entries.
I removed your RETURN because returning the types isn't useful information.
Also, consider using consistent casing for your node labels, and make sure you are using the same case between the labels in your graph and the indexes you create.
Also, you would probably want to batch these changes instead of trying to process them all at once.
If you install APOC Procedures you can make use of apoc.periodic.iterate(), which can be used to batch changes; this will be faster and easier on your heap. You will still need the indexes first.
CALL apoc.periodic.iterate("
MATCH (lt:lapTimes)
WITH lt, lt.raceId as raceId, lt.driverId as driverId
MATCH (d:Driver), (r:race)
WHERE r.raceId = raceId AND d.driverId = driverId
RETURN lt, d, ir",
"MERGE (d)-[:LAPPING_AT]->(lt)
MERGE (r)-[:TIMED_LAP]->(lt)", {}) YIELD batches, total, errorMessages
RETURN batches, total, errorMessages
Single CSV load
If you want to handle everything all at once in a single CSV load, you can do that, but again you will need indexes first. Here's what you'll need at a minimum:
CREATE INDEX ON :Driver(driverId);
CREATE INDEX ON :Race(raceId);
After those are created, you can use this, assuming you are starting from scratch (I fixed the case of your labels and made them singular):
USING PERIODIC COMMIT 25000
LOAD CSV WITH HEADERS from 'file:///lap_times.csv' AS row
MERGE (d:Driver {driverId:row.driverId})
MERGE (r:Race {raceId:row.raceId})
CREATE (lt:LapTime {raceId: row.raceId, driverId: row.driverId, lap: row.lap, position: row.position, time: row.time, milliseconds: row.milliseconds})
CREATE (d)-[:LAPPING_AT]->(lt)
CREATE (r)-[:TIMED_LAP]->(lt)
I'm trying to load some data into neo4j from csv files, and it seems a unique constraint error is triggered when it shouldn't be. In particular, I created a constraint using
CREATE CONSTRAINT ON (node:`researcher`) ASSERT node.`id_patstats` IS UNIQUE;
Then, after inserting some data in neo4j, if I run (in neo4j browser)
MATCH (n:researcher {id_patstats: "2789"})
RETURN n
I get no results (no changes, no records), but if I run
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
'file:///home/manu/proyectos/PTL_RDIgraphs/rdigraphs/datamanager/tmp_patents/person906.csv' AS line
MERGE (n:researcher {`name` : line.`person_name`})
SET n.`id_patstats` = line.`person_id`;
I get
Neo.ClientError.Schema.ConstraintValidationFailed: Node(324016)
already exists with label researcher and property id_patstats =
'2789'
and the content of file person906.csv is
manu@cochi tmp_patents $ cat person906.csv
person_id,person_name,doc_std_name,doc_std_name_id
2789,"li, jian",LI JIAN,2390
(this is a minimal non-working example extracted from a larger dataset; also, in the original "person906.csv" I made sure that "id_patstats" is really unique).
Any clue?
EDIT:
Still struggling with this...
If I run
MATCH (n)
WHERE EXISTS(n.id_patstats)
RETURN DISTINCT "node" as entity, n.id_patstats AS id_patstats
LIMIT 25
UNION ALL
MATCH ()-[r]-()
WHERE EXISTS(r.id_patstats)
RETURN DISTINCT "relationship" AS entity, r.id_patstats AS id_patstats
LIMIT 25
(clicking in the neo4j browser to get some examples of the id_patstats property) I get
(no changes, no records)
that is, the id_patstats property is not set anywhere. Moreover,
MATCH (n:researcher {`name` : "li, jian"})
SET n.`id_patstats` = XXX;
this will always trigger an error regardless of XXX, which (I guess) means the actual problem is that the name "li, jian" is already present. Although I didn't set any constraint on the name property, I'm guessing Neo4j reasons like this: you are trying to set a UNIQUE property on a node matched by a property (name) that is not necessarily unique; that match could yield several nodes, and I can't set the same UNIQUE property on all of them, so I won't even try.
At least two of your researchers have the same name. You shouldn't MERGE by name and then add id as a property. You should MERGE by id and add the name as a property and it will work fine.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
'file:///home/manu/proyectos/PTL_RDIgraphs/rdigraphs/datamanager/tmp_patents/person906.csv' AS line
MERGE (n:researcher {`id_patstats`:line.`person_id`})
SET n.`name` = line.`person_name`;
I am having a problem creating a JOIN (MATCH) relationship. I am using the Neo4j example for the Northwind graph database load as my learning example.
I have 2 simple CSV files that I successfully loaded via LOAD CSV WITH HEADERS. I then set 2 indexes, one for each entity. My final step is to create the MATCH (JOIN) statement. This is where I am having problems.
After running the script, instead of telling me how many relationships it created, my return message is "(no changes, no records)". Here are my script lines:
LOAD CSV WITH HEADERS FROM 'FILE:///TestProducts.csv' AS row
CREATE (p:Product)
SET p = row
Added 113 labels, created 113 nodes, set 339 properties, completed after 309 ms.
LOAD CSV WITH HEADERS FROM 'FILE:///TestSuppliers.csv' AS row
CREATE (s:Supplier)
SET s = row
Added 23 labels, created 23 nodes, set 46 properties, completed after 137 ms.
CREATE INDEX ON :Product(productID)
Added 1 index, completed after 20 ms.
CREATE INDEX ON :Supplier(supplierID)
Added 1 index, completed after 2 ms.
MATCH (p:Product),(s:Supplier)
WHERE p.supplierID = s.supplierID
CREATE (s)-[:SUPPLIES]->(p)
(no changes, no records)
Why? If I run the Northwind example with the example files, it works; it says 77 relationships were created. Also, is there any way to see the database structure? How can I debug this issue? Any help is greatly appreciated.
I think you may be using the wrong casing for the property names. The Northwind data uses uppercase first letters for its property names.
Try using ProductID and SupplierID in your indexes and the MATCH clause.
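For example, a sketch of the corrected statements, assuming the CSV headers (and therefore the properties set via SET p = row) are ProductID and SupplierID:
CREATE INDEX ON :Product(ProductID);
CREATE INDEX ON :Supplier(SupplierID);
MATCH (p:Product),(s:Supplier)
WHERE p.SupplierID = s.SupplierID
CREATE (s)-[:SUPPLIES]->(p);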
Thanks for all the suggestions. With Neo4j there are always multiple ways to solve the problem. I did some digging and found a rather simple solution.
MATCH (a)-[r1]->()-[r3]->(b) CREATE UNIQUE (a)-[:REQUIRES]-(b);
Literal Code (for me) is:
MATCH (a:Application)-[:CONSISTS_OF]->()-[:USES]->(o:Object) CREATE UNIQUE (a)-[:REQUIRES]-(o);
This grouped the relationships (n2) and created a virtual relationship, making the individual n2 nodes redundant for the query.
Namaste Everyone!
Dean
I am trying to load a large dataset into neo4j-3 and am looking at the options. I found neo4j-import, but the problem with that is that it is for the initial load only. I have to load around 2M records every week.
I tried loading through the shell but am having some performance issues. I tried the following:
1) Creating constraint upfront.
2) Creating Node and relationships in separate query.
3) Heap space 8G
4) dbms.memory.pagecache.size set to 4G
Many times the import just hangs and does nothing for hours.
Edit - CSV load being executed:
USING PERIODIC COMMIT 5000
LOAD CSV WITH HEADERS
FROM "file:///my_sds_39_joe.csv"
AS row
OPTIONAL MATCH (per:Person {UID : "Person."+row.player_cardnum})
WHERE per IS NULL
MERGE (p:Person {CardNumber : row.player_cardnum})
ON CREATE SET p.`Creation Date` = timestamp(), p.`Modification Date` = timestamp();
EDIT
On a second look, seems like you're trying to implement some kind of conditional logic to your insert.
It looks like what you're trying to do is figure out if a :Person exists with a UID (derived from some concatenation with row.player_cardnum), and in the case where that :Person doesn't exist and the match fails, MERGE a :Person with the CardNumber given by row.player_cardnum.
If this is your goal, you're ALMOST there with your query. The problem is with your WHERE clause.
Understand that WHERE clauses are linked with a preceding MATCH, OPTIONAL MATCH, or WITH, and only affect the linked clause.
With that WHERE on that OPTIONAL MATCH, per will always be null, but more importantly, your row will still exist, and the following MERGE will ALWAYS take place for all rows in the CSV. This is probably the source of your slowdown, as it's creating new :Person nodes for all rows.
If you're trying to null out the row completely when the OPTIONAL MATCH hits on an existing :Person (so the MERGE won't happen in that case), you'll need to add a WITH clause, and make sure your WHERE clause is applied to it instead of the OPTIONAL MATCH.
Additionally, make sure that you have either unique constraints or indexes on Person.UID and Person.CardNumber. As for the UID match, I've heard that indexes are not used when there's some kind of string concatenation of the thing you're matching upon, so you may need to assemble it first and pass it in with a WITH.
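For example, a sketch of the unique constraints mentioned above (use plain indexes instead if the values are not guaranteed to be unique):
CREATE CONSTRAINT ON (p:Person) ASSERT p.UID IS UNIQUE;
CREATE CONSTRAINT ON (p:Person) ASSERT p.CardNumber IS UNIQUE;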
Your final query would look like this:
USING PERIODIC COMMIT 5000
LOAD CSV WITH HEADERS
FROM "file:///my_sds_39_joe.csv"
AS row
// first build the UID so we can take advantage of the index
WITH row, "Person." + row.player_cardnum AS UID
OPTIONAL MATCH (per:Person {UID : UID})
// the WHERE now applies to the WITH, which will filter out the row when the OPTIONAL MATCH finds an existing :Person
WITH row, per
WHERE per IS NULL
MERGE (p:Person {CardNumber : row.player_cardnum})
ON CREATE SET p.`Creation Date` = timestamp(), p.`Modification Date` = timestamp();
I'm using Neo4j 2.0.0-M06. Just learning Cypher and reading the docs. In my mind this query would work, but I should be so lucky...
I'm importing tweets to a mysql-database, and from there importing them to neo4j. If a tweet is already existing in the Neo4j database, it should be updated.
My query:
MATCH (y:Tweet:Socialmedia) WHERE
HAS (y.tweet_id) AND y.tweet_id = '123'
CREATE UNIQUE (n:Tweet:Socialmedia {
body : 'This is a tweet', tweet_id : '123', tweet_userid : '321', tweet_username : 'example'
} )
Neo4j says: This pattern is not supported for CREATE UNIQUE
The database is currently empty of nodes with the matching labels, so there are no tweets whatsoever in the Neo4j database.
What is the correct query?
You want to use MERGE for this query, along with a unique constraint.
CREATE CONSTRAINT on (t:Tweet) ASSERT t.tweet_id IS UNIQUE;
MERGE (t:Tweet {tweet_id:'123'})
ON CREATE
SET t:SocialMedia,
t.body = 'This is a tweet',
t.tweet_userid = '321',
t.tweet_username = 'example';
This will use an index to lookup the tweet by id, and do nothing if the tweet exists, otherwise it will set those properties.
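If you also want to update an existing tweet rather than doing nothing (as described in the question), a sketch adding an ON MATCH clause:
MERGE (t:Tweet {tweet_id:'123'})
ON CREATE
SET t:SocialMedia,
    t.body = 'This is a tweet',
    t.tweet_userid = '321',
    t.tweet_username = 'example'
ON MATCH
SET t.body = 'This is a tweet',
    t.tweet_userid = '321',
    t.tweet_username = 'example';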
I would like to point out that one can use a combination of
CREATE CONSTRAINT and then a normal
CREATE (without UNIQUE).
This is for cases where one expects a unique node and wants to throw an exception if the node unexpectedly exists. (Far cheaper than looking for the node before creating it).
Also note that MERGE seems to take more CPU cycles than a CREATE (and it still takes more CPU cycles even if an exception is thrown).
An alternative scenario covering CREATE CONSTRAINT, CREATE and MERGE (though admittedly not the primary purpose of this post).
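A minimal sketch of that constraint-plus-CREATE approach, reusing the tweet example (a second CREATE with the same tweet_id would throw a ConstraintValidationFailed error instead of silently creating a duplicate):
CREATE CONSTRAINT ON (t:Tweet) ASSERT t.tweet_id IS UNIQUE;
CREATE (t:Tweet:SocialMedia {tweet_id: '123', body: 'This is a tweet'});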