I just created a new folder named Test and started the Neo4j server.
When I run the script below, I get the error "Neo.ClientError.Statement.EntityNotFound" with the message "Node with id 0":
start root=node(0)
create
(tatham {Name:'Tatham'}),
(tom {Name:'Tom'}),
(pat {Name:'Pat'}),
(chrissy {Name:'Chrissy'}),
(sailing {Name:'Sailing'}),
(mtb {Name:'MTB'}),
(rowing {Name:'Rowing'}),
(tennis {Name:'Tennis'}),
root-[:HAS_USER]->tatham,
root-[:HAS_USER]->tom,
root-[:HAS_USER]->pat,
root-[:HAS_USER]->chrissy,
tatham-[:FRIEND]->tom,
tom-[:FRIEND]->pat,
tatham-[:FRIEND]->chrissy,
tatham-[:LIKES]->sailing,
tatham-[:LIKES]->mtb,
tom-[:LIKES]->sailing,
pat-[:LIKES]->mtb,
tom-[:LIKES]->rowing,
pat-[:LIKES]->tennis,
chrissy-[:LIKES]->mtb,
chrissy-[:LIKES]->sailing
Can you kindly help me figure out how to fix this issue?
As @WilliamLyon indicated:
A new DB has no nodes, and therefore has no node with the ID of 0.
The START clause is now deprecated.
You are apparently using a very old version of Neo4j. If possible, you should install the latest version.
In addition:
Nodes now must always be specified within parentheses.
Try the following instead, which should work with your version of Neo4j as well as the latest versions:
CREATE
(tatham {Name:'Tatham'}),
(tom {Name:'Tom'}),
(pat {Name:'Pat'}),
(chrissy {Name:'Chrissy'}),
(sailing {Name:'Sailing'}),
(mtb {Name:'MTB'}),
(rowing {Name:'Rowing'}),
(tennis {Name:'Tennis'}),
(root)-[:HAS_USER]->(tatham),
(root)-[:HAS_USER]->(tom),
(root)-[:HAS_USER]->(pat),
(root)-[:HAS_USER]->(chrissy),
(tatham)-[:FRIEND]->(tom),
(tom)-[:FRIEND]->(pat),
(tatham)-[:FRIEND]->(chrissy),
(tatham)-[:LIKES]->(sailing),
(tatham)-[:LIKES]->(mtb),
(tom)-[:LIKES]->(sailing),
(pat)-[:LIKES]->(mtb),
(tom)-[:LIKES]->(rowing),
(pat)-[:LIKES]->(tennis),
(chrissy)-[:LIKES]->(mtb),
(chrissy)-[:LIKES]->(sailing);
The root node will be created automatically the first time it is encountered by the query, and then re-used.
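Note that re-running the entire CREATE would create a second, separate root node, since CREATE always creates new nodes. If the script may be run more than once, a safer variant (a sketch only, using a hypothetical Root label that is not part of the original script) is to MERGE the root first:
MERGE (root:Root)
CREATE (root)-[:HAS_USER]->(tatham {Name:'Tatham'});
and so on for the remaining nodes and relationships.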
The problem is the first line of your Cypher query: start root=node(0). That clause says "find the node with ID 0", but if you haven't inserted any data yet there is no node to find, hence the error.
START has been deprecated and is no longer required, so you can just remove it.
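For example, once the CREATE statement above has run, the same data can be retrieved with MATCH, which replaces the deprecated START clause (a minimal sketch, assuming the graph created above):
MATCH (root)-[:HAS_USER]->(user)
RETURN user.Name;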
I have this code:
erlang code
run it:
output
On the other node, the output is:
other node
Why is Zzh1 pang?
P.S. Sorry, I cannot post a pic...
As we can see, even though the cookie on both nodes seems to be the same, nodes() is returning [].
Can you please try the following commands?
erlang:set_cookie('test@vmailpush',<< the same cookie>>).
net_adm:ping('test@vmailpush').
Please run these commands from the node you are pinging from, i.e. newmc@vmailpush.
In your printout, nodes() returned an empty list - the nodes are not connected. net_adm:names() queries the local epmd daemon, which shows that the test node is registered there, but it says nothing about connectedness. It could simply be that your nodes have different cookies and refuse to connect.
My simple code is retrieving attributes from nodes in neo4j.
results = graph.cypher.execute("MATCH (m)-[:AB]->(a) "
                               "RETURN m.searchField as origin, a.searchField as destination "
                               "LIMIT {limit}", {"limit": 100})
nodes = []
rels = []
i = 0
for r in results:
    print(r)
    ent1 = {"title": r.origin, "label": "entity"}
but the server returns "NameError("global name 'searchField' is not defined",)". Certainly I missed something, but I'm puzzled that the searchField inside the Cypher query is the object of the error.
This is still with py2neo 2.0.8.
Thanks for any pointer, hj
Later edit:
Thanks for taking the time to look at this question. Two elements further puzzle me in this error:
1. The Cypher query is fine, and returns the result I expect in neo4j-shell without problems.
2. This code seems to work fine when I run Bottle standalone (run(port=8080) in main), but fails when I run it as WSGI under an Apache server. I am wondering if it is a problem with the running user, or with the context in some part of the code.
Do you have a property called searchField on the node(s)?
If not, the query will fail.
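A quick way to check is to count how many matched nodes actually carry the property (an illustrative query, reusing the AB pattern from the question):
MATCH (m)-[:AB]->(a)
WHERE m.searchField IS NOT NULL AND a.searchField IS NOT NULL
RETURN count(*);
If this returns 0, the RETURN values in your original query will all be null and the problem is in the data, not in py2neo.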
BTW, it is easier to use a string for the query like so:
query = '''
MATCH (m)-[:AB]->(a)
RETURN m.searchField as origin, a.searchField as destination
LIMIT {limit}
'''
result = graph.cypher.execute(query, limit=100)
Got it to work! It was unrelated to the code, but I did not know that any change to Python code served through WSGI requires at least an Apache reload:
sudo service apache2 reload
With that I obtain the same (and correct) behavior as with the direct server. The error was the result of an old version of the code... newbie mistake!
Thanks and sorry for the hassle, hj
I've watched Nicole White's awesome YouTube video “Using LOAD CSV in the Real World” and decided to re-create the Neo4j data using the same method.
I’ve cloned her git repo on this subject and have been working through this example on the Community Edition of Neo4j on my Mac.
I’m stepping through the load.cql file one command at a time, pasting each command into the command window.
Things are going pretty well: I’ve got a bunch of nodes created. To deal with null values for sub_products and sub_issues in the master file, I created two other CSV files, sub_issues.csv and sub_products.csv, as described in the video.
But when I try reading either of these files, I get "(no changes, no rows)", so somehow I get the impression there is something wrong…
Below is the actual command sequence I used for the incremental read.
// Load.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS
FROM 'file:///Volumes/microSD/neo4j-complaints/sub_issue.csv' AS line
WITH line
WHERE line.`Sub-issue` <> '' AND
line.`Sub-issue` IS NOT NULL
MATCH (complaint:Complaint { id: TOINT(line.`Complaint ID`) })
MATCH (complaint)-[:WITH]->(issue:Issue)
MERGE (subIssue:SubIssue { name: UPPER(line.`Sub-issue`) })
MERGE (subIssue)-[:IN_CATEGORY]->(issue)
CREATE (complaint)-[:WITH]->(subIssue)
;
Strip out some of the later statements and do a "RETURN identifier1, identifier2" etc. to see what the engine is doing; see the sketch below.
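For example, something like this (a sketch built from the query above; USING PERIODIC COMMIT is dropped because it is only allowed in updating queries) shows whether each stage finds anything:
LOAD CSV WITH HEADERS
FROM 'file:///Volumes/microSD/neo4j-complaints/sub_issue.csv' AS line
WITH line
WHERE line.`Sub-issue` <> '' AND line.`Sub-issue` IS NOT NULL
MATCH (complaint:Complaint { id: TOINT(line.`Complaint ID`) })
RETURN line.`Sub-issue`, complaint
LIMIT 10;
If that returns no rows, shorten it further (e.g. RETURN line LIMIT 10 right after the LOAD CSV) to see whether the file is being read at all.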
I removed (or decommissioned, I can't remember) a DSE analytics node (with IP 10.14.5.50) a couple of months ago. When I now try to execute a DSE Shark (CREATE TABLE ccc AS SELECT ...) query, I receive:
15/01/22 13:23:17 ERROR parse.SharkSemanticAnalyzer: org.apache.hadoop.hive.ql.parse.SemanticException: 0:0 Error creating temporary folder on: cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db. Error encountered near token 'TOK_TMP_FILE'
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1256)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1053)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8342)
at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:105)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
at shark.SharkDriver.compile(SharkDriver.scala:215)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:347)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at shark.SharkCliDriver$.main(SharkCliDriver.scala:240)
at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: java.io.IOException: Error connecting to node 10.14.5.50:9160 with strategy STICKY.
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:216)
at org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:270)
at org.apache.hadoop.hive.ql.Context.getExternalTmpFileURI(Context.java:363)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1253)
... 12 more
I guess the above error is due to my keyspace referring to the old node:
shark> DESCRIBE DATABASE mykeyspace;
OK
mykeyspace cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db
Time taken: 0.997 seconds
Is there any way for me to fix this incorrect database path?
Tried (but failed) workaround to recreate the database: in cqlsh I created a keyspace thekeyspace and added a table thetable. I then opened up dse hive (and noticed that DESCRIBE DATABASE thekeyspace gives me a correct cfs path). However, I am unable to drop the database using DROP DATABASE thekeyspace.
Additional information:
I have no external tables in my keyspace.
Making the SELECT against the tables works.
Setting -hiveconf cassandra.host=WORKING_NODE_IP does not help.
The following commands return proper IPs (i.e. not X.X.X.50):
dsetool listjt
dsetool jobtracker
dsetool sparkmaster
I am getting the same error when I execute the query using dse hive.
No Shark variable is referring to X.X.X.50 when I execute set; in its REPL.
I am running DSE 4.5.
I stumbled across this page, which says you need to TRUNCATE "HiveMetaStore"."MetaStore" (in cqlsh) after removing Hive nodes. That did the trick.
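For reference, the command in cqlsh is:
TRUNCATE "HiveMetaStore"."MetaStore";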
I'm trying to use the Java API for Neo4j but I seem to be stuck at IndexHits. If I query the DB with Cypher using
START n=node:types(type="Process") RETURN n;
I get all 2087 nodes of type "Process".
In my application I have the following lines
Index<Node> nodeIndex = db.index().forNodes("types");
IndexHits<Node> hits = nodeIndex.get("type", "Process");
System.out.println("Node index size: " + hits.size());
which leads my console to spit out a value of 0. Here, db is of course an instance of GraphDatabaseService.
I expected an object that included all 2087 nodes. What am I doing wrong?
The .size() question is just the prelude to my iterator
for(Node process : hits) { ... }
but that does not do much when hits.size() == 0. According to http://api.neo4j.org/1.9.2/org/neo4j/graphdb/index/IndexHits.html this should be possible, provided there is something in hits.
Thanks in advance for your help.
I figured it out. Man, I feel so embarrassed...
It so happens that I had set DB_PATH to my default data folder, whereas the default storage folder is the default data folder plus graph.db. When I tried to run the code with the corrected DB_PATH, I got an error saying that a lock file was in place because the Neo4j server was running. After shutting it down, it worked perfectly.
So, if you happen to see the following error, just stop the server and run the code again:
Caused by: org.neo4j.kernel.StoreLockException: Could not create lock file
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:491)
I found on several forums that you cannot run the Neo4j server and query the same store through the embedded Java API at the same time.