Cypher statement returns (no changes, no rows) - Neo4j

I've watched Nicole White's awesome YouTube video “Using LOAD CSV in the Real World” and decided to re-create the Neo4j data using the same method.
I've cloned her Git repo on this subject and have been working through the example on the Community Edition of Neo4j on my Mac.
I'm stepping through the load.cql file one command at a time, pasting each command into the command window.
Things are going pretty well; I've got a bunch of nodes created. To deal with
null values for sub_products and sub_issues in the master file, I created
two other CSV files, sub_issues.csv and sub_products.csv, as described in the video.
But when I try reading either of these files, I get "(no changes, no rows)".
Somehow I get the impression there is something wrong…
Below is the actual command sequence I used for the incremental read.
// Load.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS
FROM 'file:///Volumes/microSD/neo4j-complaints/sub_issue.csv' AS line
WITH line
WHERE line.`Sub-issue` <> '' AND
line.`Sub-issue` IS NOT NULL
MATCH (complaint:Complaint { id: TOINT(line.`Complaint ID`) })
MATCH (complaint)-[:WITH]->(issue:Issue)
MERGE (subIssue:SubIssue { name: UPPER(line.`Sub-issue`) })
MERGE (subIssue)-[:IN_CATEGORY]->(issue)
CREATE (complaint)-[:WITH]->(subIssue)
;

Strip out some of the later statements and do a "RETURN identifier1, identifier2" etc. to see what the engine is doing.
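For example, a minimal diagnostic sketch (the RETURN columns and the LIMIT are my choice, not from the original answer) that keeps only the read, filter, and first MATCH steps:
// Diagnostic only: RETURN the intermediate rows instead of writing anything.
LOAD CSV WITH HEADERS
FROM 'file:///Volumes/microSD/neo4j-complaints/sub_issue.csv' AS line
WITH line
WHERE line.`Sub-issue` <> '' AND
line.`Sub-issue` IS NOT NULL
MATCH (complaint:Complaint { id: TOINT(line.`Complaint ID`) })
RETURN line.`Sub-issue`, complaint
LIMIT 10;
If the WHERE stage returns rows on its own but adding the MATCH yields nothing, the complaint IDs in the CSV are not matching any existing :Complaint nodes.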

Related

Neo.ClientError.Statement.EntityNotFound

I just created a new folder named Test and started the Neo4j server.
When I run the script below, I get the error "Neo.ClientError.Statement.EntityNotFound"
and the message "Node with id 0".
start root=node(0)
create
(tatham {Name:'Tatham'}),
(tom {Name:'Tom'}),
(pat {Name:'Pat'}),
(chrissy {Name:'Chrissy'}),
(sailing {Name:'Sailing'}),
(mtb {Name:'MTB'}),
(rowing {Name:'Rowing'}),
(tennis {Name:'Tennis'}),
root-[:HAS_USER]->tatham,
root-[:HAS_USER]->tom,
root-[:HAS_USER]->pat,
root-[:HAS_USER]->chrissy,
tatham-[:FRIEND]->tom,
tom-[:FRIEND]->pat,
tatham-[:FRIEND]->chrissy,
tatham-[:LIKES]->sailing,
tatham-[:LIKES]->mtb,
tom-[:LIKES]->sailing,
pat-[:LIKES]->mtb,
tom-[:LIKES]->rowing,
pat-[:LIKES]->tennis,
chrissy-[:LIKES]->mtb,
chrissy-[:LIKES]->sailing
Can you kindly help me figure out how to fix this issue?
As @WilliamLyon indicated:
A new DB has no nodes, and therefore has no node with the ID of 0.
The START clause is now deprecated.
You are apparently using a very old version of Neo4j. If possible, you should install the latest version.
In addition:
Nodes now must always be specified within parentheses.
Try the following instead, which should work with your version of Neo4j as well as the latest versions:
CREATE
(tatham {Name:'Tatham'}),
(tom {Name:'Tom'}),
(pat {Name:'Pat'}),
(chrissy {Name:'Chrissy'}),
(sailing {Name:'Sailing'}),
(mtb {Name:'MTB'}),
(rowing {Name:'Rowing'}),
(tennis {Name:'Tennis'}),
(root)-[:HAS_USER]->(tatham),
(root)-[:HAS_USER]->(tom),
(root)-[:HAS_USER]->(pat),
(root)-[:HAS_USER]->(chrissy),
(tatham)-[:FRIEND]->(tom),
(tom)-[:FRIEND]->(pat),
(tatham)-[:FRIEND]->(chrissy),
(tatham)-[:LIKES]->(sailing),
(tatham)-[:LIKES]->(mtb),
(tom)-[:LIKES]->(sailing),
(pat)-[:LIKES]->(mtb),
(tom)-[:LIKES]->(rowing),
(pat)-[:LIKES]->(tennis),
(chrissy)-[:LIKES]->(mtb),
(chrissy)-[:LIKES]->(sailing);
The root node will be created automatically the first time it is encountered by the query, and then re-used.
The problem is the first line of your Cypher query: start root=node(0). That statement says "find the node with ID 0"; if you haven't inserted any data yet, there is no node to find, hence the error.
START has been deprecated and is no longer required, so you can just remove it.
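As a quick sanity check (my own sketch, not part of the original answers), you can confirm the graph was created by matching the pattern back out:
// List each user attached to the root node and the hobbies they like.
MATCH (root)-[:HAS_USER]->(u)-[:LIKES]->(h)
RETURN u.Name AS user, collect(h.Name) AS likes;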

Neo4j Java VM Tuning (v2.3 Community)

From what I can tell, I'm having an issue where my Neo4j v2.3 Community JVM keeps adding items to the old-gen heap and is never able to garbage-collect them.
Here is a detailed outline of the situation.
I have a PHP file which calls the Dropbox Delta API and writes the file structure out to my Neo4j database. Each call to Delta returns a data set of 2,000 items, from which I pull the information I need. The following is an example of what the query looks like with just one item; usually I send in full batches of 2,000 items, as that gave me the best results.
***Following is an example Query***
MERGE (c:Cloud { type:'Dropbox', id_user:'15', id_account:''})
WITH c
UNWIND [
{ parent_shared_folder_id:488417928, rev:'15e1d1caa88',.......}
]
AS items MERGE (i:Item { id:items.path, id_account:'', id_user:'15', type:'Dropbox' })
ON Create SET i = { id:items.path, id_account:'', id_user:'15', is_dir:items.is_dir, name:items.name, description:items.description, size:items.size, created_at:items.created_at, modified:items.modified, processed:1446769779, type:'Dropbox'}
ON Match SET i+= { id:items.path, id_account:'', id_user:'15', is_dir:items.is_dir, name:items.name, description:items.description, size:items.size, created_at:items.created_at, modified:items.modified, processed:1446769779, type:'Dropbox'}
MERGE (p:Item {id_user:'15', id:items.parentPath, id_account:'', type:'Dropbox'})
MERGE (p)-[:Contains]->(i)
MERGE (c)-[:Owns]->(i)
***The query is sent via Everyman***
static function makeQuery($client, $qry) {
    return new Everyman\Neo4j\Cypher\Query($client, $qry);
}
This works fine and generally from start to finish takes 8-10 seconds to run.
The Dropbox account I am accessing contains around 35,000 items, and it takes around 18 runs of my PHP script to populate my Neo4j database with the folder/file structure of the Dropbox account.
With every run of this script, around 50 MB of items is added to the Neo4j JVM old-gen heap, and 30 MB of that is never removed by GC.
The end result, obviously, is that the VM runs out of memory and gets stuck in a constant state of GC throttling.
I have tried a range of Neo4j VM settings, as well as an update from Neo4j v2.2.5 to v2.3, which actually appears to have made the problem worse.
My current settings are as follows:
-server
-Xms4096m
-Xmx4096m
-XX:NewSize=3072m
-XX:MaxNewSize=3072m
-XX:SurvivorRatio=1
I am testing on a Windows 10 PC with 8 GB of RAM and a 2.5 GHz quad-core i5, running Java 1.8.0_60.
Any info on how to solve this issue would be greatly appreciated.
Cheers, Jack.
Reduce the new size to 1024m. Change your settings to:
-server
-Xms4096m
-Xmx4096m
-XX:NewSize=1024m
Most likely the size of your transaction grows too large.
I recommend sending each of the parents in separately: instead of one big UNWIND, send one statement per parent.
Make sure to use the new transactional HTTP endpoint; I recommend going with neoclient instead of Neo4jPHP.
You should also use parameters instead of literal values!
And don't repeat the user-id, type, etc. properties on every item.
Are you sure you want to connect everything to c, and not just the root of the directory structure? I would do the latter:
MERGE (c:Cloud:Dropbox { id_user:{userId}})
MERGE (p:Item:Dropbox {id:{parentPath}})
// owning the parent should be good enough
MERGE (c)-[:Owns]->(p)
WITH c, p
UNWIND {items} as item
MERGE (i:Item:Dropbox { id:item.path})
ON Create SET i += { is_dir:item.is_dir, name:item.name, created_at:item.created_at }
SET i += { description:item.description, size:item.size, modified:item.modified, processed:timestamp()}
MERGE (p)-[:Contains]->(i);
Make sure to use 2.3.0 for best MERGE performance for relationships.
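One further note of my own (an assumption, not stated in the answer): MERGE on :Item(id) is only fast if that property is backed by an index, so it is worth creating a unique constraint once before loading, assuming item ids are globally unique:
// Run once before the import; gives MERGE an index lookup on :Item(id).
CREATE CONSTRAINT ON (i:Item) ASSERT i.id IS UNIQUE;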

Neo4j 2.1.1: How to set up logging to analyze "Unknown Error"

I am trying to load a lot of data, exported from my SQL Server as a CSV file, into Neo4j using the following query:
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:e:/temp/sql_backup/events.csv" AS csvLine
MERGE (dtStart:Date { Name: REPLACE (SUBSTRING(csvLine.VT_Start,0,10),"-","")})
MERGE (dtEnd:Date { Name: REPLACE (SUBSTRING(csvLine.VT_End,0,10),"-","")})
MERGE (ev:Event { EventId : csvLine.EventID})
ON CREATE SET ev = {
EventId : csvLine.EventID,
TagId : csvLine.TagID,
EventTypeId : csvLine.EventTypeID,
IntervalIdentifier : csvLine.AlarmID,
VT_Start : csvLine.VT_Start ,
VT_End : csvLine.VT_End,
Suppressed : csvLine.Suppressed,
system_messagetypename : csvLine.system_messagetypename,
system_inputname : csvLine.system_inputname,
system_spare1 : csvLine.system_spare1,
Priority : csvLine.Priority,
Console : csvLine.Console,
Operator : csvLine.Operator,
Message : csvLine.Message,
Parameter : csvLine.Parameter,
FromValue : csvLine.FromValue,
ToValue : csvLine.ToValue,
UnitOfMeasure : csvLine.UnitOfMeasure,
Limit : csvLine.Limit,
Value : csvLine.Value,
User1 : csvLine.User1,
BlockName : csvLine.BlockName }
MERGE (al:Alarm{ Name: csvLine.AlarmID})
MERGE (pr:Priority{ Name: csvLine.Priority})
MERGE (con:Console{ Name: csvLine.Console })
MERGE (op:Operator{ Name: csvLine.Operator })
MERGE (prm:Parameter{ Name: csvLine.Parameter })
MERGE (uom:UOM{ Name: csvLine.UnitOfMeasure })
MERGE (user:User{ Name: csvLine.User1 })
MERGE (blk:Block{ Name: csvLine.BlockName })
MERGE (tag:Tag{ TagId: csvLine.TagID })
MERGE (evt:EventType{ EventTypeID: csvLine.EventTypeID })
CREATE UNIQUE (con)<-[:FOR_CONSOLE]-(ev)-[:HAS_ALARM]->(al)
CREATE UNIQUE (op)<-[:FOR_OPERATOR]-(ev)-[:HAS_PRIORITY]->(pr)
CREATE UNIQUE (user)<-[:FOR_USER]-(ev)-[:HAS_UOM]->(uom)
CREATE UNIQUE (blk)<-[:HAS_BLOCK]-(ev)-[:HAS_EVENT_TYPE]->(evt)
CREATE UNIQUE (ev)-[:FOR_TAG]->(tag)
CREATE UNIQUE (dtStart)<-[:DATE_VT_START]-(ev)-[:FOR_TAG]->(tag)-[:DATE_VT_END]->(dtEnd)
I see that this query runs for a while, and then I get an "Unknown Error" in the browser.
I looked in the data directory to check whether there were any logs that would explain things in more detail, but I can't find any logs generated there either.
The Neo4j documentation talks about editing conf/neo4j-server.properties, but I don't see either a conf folder or logs in my Neo4j directory.
The only logs I see in that root folder are nioneo_logical.log, tm_tx_log, and active_tx_log.
Can someone please explain how to set up debug logging in Neo4j 2.1.1 so that I can see what's causing this error? I suspect my laptop may be running out of available RAM, so I want to know whether anyone thinks Neo4j crashes when it runs out of available RAM.
I also noticed that some data is created before the error is thrown, but no relationships are created. So is there anything wrong with the query itself, or is it the error that is preventing the relationships from being created?
EDIT 1:
BTW, what happened to the Neo4j 2.1.1 download? I can't find it on the Neo4j site anymore!
EDIT 2:
I downloaded the latest version, 2.1.2, tried to run the query again, and hit the same issue. I think I now have some idea of what is happening: the periodic commit is not helping, because it looks to me like the query first creates all the event nodes and only then starts to create the associations.
I reduced the number of events to 5,000 and got it to work, but when I increased it to 100,000 it crashed again, and there were 30,000+ events in the database with no associations.
My conclusion: either my query is incorrect, or the way periodic commits are handled is not correct. Either way, we will run out of physical RAM when the dataset is large.
EDIT 3:
Running from the shell, the first query works, as it does not have any CREATE statements.
The same query also runs from the browser window, as long as the file is not large.
The reason that you cannot find any errors in the logs is that long-running queries in the Neo4j browser time out at 60 seconds. This does not mean that the query actually failed; in fact, it will run to completion.
For testing long running queries, please use the Neo4j Shell, which does not have a timeout.
http://docs.neo4j.org/chunked/stable/shell.html
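Separately, given the observation above that event nodes appear without relationships, one common workaround (my own sketch, not part of the answer) is to split the single giant statement into several smaller passes, so that each periodic-commit transaction stays small. For example, a first pass that creates only the Event nodes, followed by one pass per relationship type:
// Pass 1: nodes only.
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:e:/temp/sql_backup/events.csv" AS csvLine
MERGE (ev:Event { EventId: csvLine.EventID });
// Pass 2 (one of several): a single relationship type per pass.
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:e:/temp/sql_backup/events.csv" AS csvLine
MATCH (ev:Event { EventId: csvLine.EventID })
MERGE (al:Alarm { Name: csvLine.AlarmID })
CREATE UNIQUE (ev)-[:HAS_ALARM]->(al);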

Neo4j: Inserting 7k nodes is slow (Spring Data Neo4j / SpringRestGraphDatabase)

I'm building an application where my users can manage dictionaries. One feature is uploading a file to initialize or update the dictionary's content.
The part of the structure I'm focusing on for a start is Dictionary -[:CONTAINS]->Word.
Starting from an empty database (Neo4j 1.9.4, but I also tried 2.0.0M5), accessed via Spring Data Neo4j 2.3.1 in a distributed environment (therefore using SpringRestGraphDatabase, but testing against localhost), I'm trying to load 7k words into one dictionary. However, I can't get it done in less than 8-9 minutes on a Linux machine with a Core i7, 8 GB of RAM, and an SSD drive (ulimit raised to 40000).
I've read lots of posts about loading/inserting performance using REST and have tried to apply the advice I found, but with no better luck. The BatchInserter tool doesn't seem to be a good option for me due to my application constraints.
Can I hope to load 10k nodes in a matter of seconds rather than minutes?
Here is the code I came up with, after all my reading:
Map<String, Object> dicProps = new HashMap<String, Object>();
dicProps.put("locale", locale);
dicProps.put("category", category);
Dictionary dictionary = template.createNodeAs(Dictionary.class, dicProps);

Map<String, Object> wordProps = new HashMap<String, Object>();
Set<Word> words = readFile(filename);
for (Word gw : words) {
    wordProps.put("txt", gw.getTxt());
    Word w = template.createNodeAs(Word.class, wordProps);
    template.createRelationshipBetween(dictionary, w, Contains.class, "CONTAINS", true);
}
I solved this kind of problem by creating a CSV file and then reading it from Neo4j. The steps are:
Write a class which takes the input data and creates a CSV file from it (it can be one file per node kind, or you can even create a file that will be used to build relationships).
In my case I also created a servlet which allows Neo4j to read that file over HTTP.
Create the proper Cypher statements to read and parse that CSV file. Here are some samples that I use (if you use Spring Data, also remember the labels):
A simple one:
load csv with headers from {fileUrl} as line
merge (:UserProfile:_UserProfile {email: line.email})
A more complicated one:
load csv with headers from {fileUrl} as line
match (c:Calendar {calendarId: line.calendarId})
merge (a:Activity:_Activity {eventId: line.eventId})
on create set a.eventSummary = line.eventSummary,
a.eventDescription = line.eventDescription,
a.eventStartDateTime = toInt(line.eventStartDateTime),
a.eventEndDateTime = toInt(line.eventEndDateTime),
a.eventCreated = toInt(line.eventCreated),
a.recurringId = line.recurringId
merge (a)-[r:EXPORTED_FROM]->(c)
return count(r)
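Applied to the asker's Dictionary-[:CONTAINS]->Word model, a sketch of the same technique might look like the following (the txt column name and the {fileUrl}, {locale}, and {category} parameters are my assumptions):
// Assumes a words CSV with a "txt" header, served at {fileUrl}.
load csv with headers from {fileUrl} as line
match (d:Dictionary {locale: {locale}, category: {category}})
merge (w:Word {txt: line.txt})
merge (d)-[:CONTAINS]->(w)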
Try the following:
Use the native Neo4j API rather than Spring Data Neo4j when performing batch operations.
Commit in batches, e.g. every 500 words.
NOTE: There are certain properties (type) added by SDN that will be missing when using the native approach.

Neo4j: Java API IndexHits<Node>.size() is 0

I'm trying to use the Java API for Neo4j but I seem to be stuck at IndexHits. If I query the DB with Cypher using
START n=node:types(type="Process") RETURN n;
I get all 2087 nodes of type "Process".
In my application I have the following lines
Index<Node> nodeIndex = db.index().forNodes("types");
IndexHits<Node> hits = nodeIndex.get("type", "Process");
System.out.println("Node index size: " + hits.size());
which leads my console to spit out a value of 0. Here, db is of course an instance of GraphDatabaseService.
I expected an object that included all 2087 nodes. What am I doing wrong?
The .size() call is just the prelude to my iterator
for(Node process : hits) { ... }
but that does not do much when hits.size() == 0. According to http://api.neo4j.org/1.9.2/org/neo4j/graphdb/index/IndexHits.html, this should be possible, provided there is something in hits.
Thanks in advance for your help.
I figured it out. Man, I feel so embarrassed...
It so happens that I had set DB_PATH to my default data folder, whereas the default storage location is the data folder plus graph.db. When I ran the code with the corrected DB_PATH, I got an error saying that a lock file was in place because the Neo4j server was running. After shutting the server down, it worked perfectly.
So, if you happen to see the following error, just stop the server and run the code again:
Caused by: org.neo4j.kernel.StoreLockException: Could not create lock file
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:491)
I found on several forums that you cannot run the Neo4j server and access the same database with the embedded Java API at the same time.

Resources