I followed this post to test Facebook friends-of-friends in Neo4j 2.0.1:
http://blog.neo4j.org/2013/06/fun-with-facebook-in-neo4j_19.html
I am able to create the nodes successfully, and auto indexing is enabled.
Here is the create node statement - create (n{name:'User 123', type:'Facebook'});
This works fine
When I create the relationships, I am getting this notification: "Nothing was created and No data Returned"
Here is the create Relationship statement
start n1=node:node_auto_index(name='User 123'),n2=node:node_auto_index(name='User XYZ') CREATE n1-[:IS_A_FRIEND_OF]->n2;
Any help is very much appreciated. I am new to neo4j and trying to get my hands dirty by learning some stuff.
Neo4j 2.0 has a new feature called schema indexes. For most use cases it's beneficial to use schema indexing instead of autoindexing.
For your example, I'd move the value of the type property to become a label.
First, create the index for property name based on label Facebook:
CREATE INDEX ON :Facebook(name)
The CREATE looks like:
CREATE (n:Facebook {name:'User 123'})
For creating the relationships use:
MATCH (n1:Facebook {name:'User 123'}), (n2:Facebook {name:'User XYZ'})
CREATE (n1)-[:IS_A_FRIEND_OF]->(n2)
You might also look into Neo4j 2.0's new MERGE statement.
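A minimal sketch of the MERGE variant, using the same label and property values as above (MERGE only creates a node or relationship when it does not already exist, so this is safe to re-run):

```cypher
MERGE (n1:Facebook {name:'User 123'})
MERGE (n2:Facebook {name:'User XYZ'})
MERGE (n1)-[:IS_A_FRIEND_OF]->(n2)
```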
The CREATE INDEX <indexName> command is not idempotent and will cause an error if the given index already exists. I'm new to neo4j, and can't find a predicate that avoids this error. I've tried various permutations of ANY(...), and they all barf at "db.indexes()".
Since CREATE INDEX ... fails if the index exists and DROP INDEX ... fails if it doesn't, I don't know how to write a .cypher file that creates the index only if needed.
A short form might be something like CREATE INDEX indexName FOR (c:SomeLabel) ON (c.someProperty) IF NOT EXISTS, but of course that short form doesn't exist.
Is there some way to do this with a predicate, subquery or some such expression?
As of Neo4j 4.1.3, a new index creation syntax has been introduced to do just that
CREATE INDEX myIndex IF NOT EXISTS FOR (t:Test) ON (t.id)
You can use the apoc.schema.node.indexExists function to check whether an index exists before creating it.
For example, this query will create the :Foo(id) index if it does not already exist:
WITH 1 AS ignored
WHERE NOT apoc.schema.node.indexExists('Foo', ['id'])
CALL db.createIndex('index_name', ['Foo'], ['id'], 'native-btree-1.0') YIELD name, labels, properties
RETURN name, labels, properties
For some reason, the Cypher planner currently is not able to parse the normal CREATE INDEX index_name ... syntax after the above WHERE clause, so this query uses the db.createIndex procedure instead.
There is also a much more powerful APOC procedure, apoc.schema.assert, but it may be overkill for your requirements.
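For reference, a hedged sketch of that procedure, reusing the :Foo(id) example from above: apoc.schema.assert takes a map of desired indexes and a map of desired constraints, and creates whichever are missing:

```cypher
CALL apoc.schema.assert({Foo: [['id']]}, {})
```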
By default, the command is ignored if the index exists.
Can you test the following?
CREATE (n:Car {id: 1});
Added 1 label, created 1 node, set 1 property, completed after 23 ms.
CREATE INDEX ON :Car (id);
1st execution: Added 1 index, completed after 6 ms.
2nd execution: (no changes, no records)
I tried both suggestions, and neither solves my issue. I don't have time to discover, through trial-and-error, how to install APOC in my environment.
The first line of mbh86's answer is inaccurate, at least on my system. The command is not ignored; it fails with an error, so if anything else is in the same Cypher script, it will fail too.
The best I can do is apparently to wrap the CREATE INDEX in a command-line string, run it from a bash or Python script, and check the return code from the calling program.
I appreciate the effort of both commentators, and I didn't want to leave either hanging.
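That command-line wrapper can be sketched in Python. This is only a sketch under assumptions: cypher-shell is on the PATH and the credentials are placeholders; the point is that the caller inspects the return code, so an already-existing index no longer aborts anything else:

```python
import subprocess

def run_cypher(statement, shell_cmd=("cypher-shell", "-u", "neo4j", "-p", "secret")):
    """Pipe a Cypher statement into an external command and report
    success/failure instead of letting one failing statement abort
    a whole .cypher script."""
    result = subprocess.run(
        list(shell_cmd),
        input=statement,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stderr

# Usage against a real server (not run here):
#   ok, err = run_cypher("CREATE INDEX ON :Car (id);")
#   if not ok:
#       print("index creation failed, probably exists already:", err.strip())
```

The command tuple is deliberately a parameter, so the same helper works with neo4j-shell or any other CLI client.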
I am trying to query a multiply nested object with Falcor. I have a user which, among other properties, has a follower value, which itself has properties like name.
I want to query the name of the user and the first 10 follower.
My Falcor server side can be seen on GitHub; it contains my router and resolver.
I query the user with user["KordonDev"]["name", "stars"], and the followers with user["KordonDev"].follower[0..10]["name", "stars"].
The route for followers is user[{keys:logins}].follower[{integers:indexes}], but this doesn't catch the query above.
I tried to add it as string query.
user["KordonDev"]["name", "stars", "follower[0..10].name"] doesn't work.
The second try was to query with arrays of keys. ["user", "KordonDev", "follower", {"from":0, "to":10}, "name"] but here I don't know how to query the name of the user.
As far as I know, and from looking at the path parser, there is no way to do nested queries.
What you want to do is batch the query and do two queries:
user["KordonDev"]["name", "stars"]
user["KordonDev"]["follower"][0..10].name
It seems that Falcor does not support this; there is even a somewhat old issue discussing how people try to do nested queries.
On the point about the current syntax leading people to try this:
['lolomo', 0, 0, ['summary', ['item', 'summary']]]
I can see folks trying to do the same thing with the new syntax:
"lolomo[0][0]['summary', 'item.summary']"
As soon as they know they can do:
"lolomo[0][0]['summary', 'evidence']"
So it seems deeply nested queries are not supported.
I am trying to create an index in Neo4j, but it does not seem to work. I insert data with the following code snippet.
create index on :`Person`(`name`)
create (_0:`Person` {`name`:"Andres"})
create (_1:`Person` {`name`:"Mark"})
create _0-[:`KNOWS`]->_1
The code here works fine. But when I try to fetch data with cypher command
START n=node:name(name= 'Bob')
RETURN n
I've got an error
Index `name` does not exist
Neo.ClientError.Schema.NoSuchIndex
But as you can see above, I declared an index on name. What is wrong with my query?
Either you use automatic indexing - http://docs.neo4j.org/chunked/milestone/auto-indexing.html - where you first specify in the neo4j config file which properties get indexed (then start/restart the server),
or, when using manual indexing - http://docs.neo4j.org/chunked/milestone/indexing-add.html - you must add each new node to the index manually.
However, what you created with create index on :Person(name) is a schema index, which is queried with MATCH rather than the legacy START syntax:
MATCH (n:Person)
USING INDEX n:Person(name)
WHERE n.name = 'Bob'
RETURN n
See also neo4j cypher : unable to create and use an index
I've created an INDEX using cypher for my :Person label, but I cannot find any way of printing out a list of indexes or constraints available to my Neo4j system.
Is this something that is doable via Cypher?
As Eve pointed out, you can get labels by calling CALL db.labels(). To get indexes just do:
CALL db.indexes()
Also, if you type CALL db. in your Neo4j browser, you will see all the available procedures.
In browser you can use :schema or schema in the shell to print out all the indexes and constraints.
Nope. There's not even a way to list labels:
https://github.com/neo4j/neo4j/issues/1287
There are some REST calls for this, and the undocumented schema command in neo4j-shell is handy.
Edit: Update for 3.0 with the new stored procedures!
CALL db.labels()
(Applicable to neo4j version 2.3.1 or later)
To get indexes via REST use this:
curl http://localhost:7474/db/data/schema/index/
In the neo4j console you can run the :schema command to get all indexes & constraints.
I am trying to figure out a way to keep my mysql db and elasticsearch db in sync. I have setup a jdbc river using the jprante / elasticsearch-river-jdbc plugin for elasticsearch. When I execute the below request:
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
"type" : "jdbc",
"jdbc" : {
"driver" : "com.mysql.jdbc.Driver",
"url" : "jdbc:mysql://localhost:3306/MY-DATABASE",
"user" : "root",
"password" : "password",
"sql" : "select * from users",
"poll" : "1m"
},
"index" : {
"index" : "test_index",
"type" : "user"
}
}'
the river starts indexing data, but for some records I get org.elasticsearch.index.mapper.MapperParsingException. There is a discussion related to this here, but I want to know a way to get around it.
Is it possible to permanently fix this by creating an explicit mapping for all 'fields' of the 'type' that I am trying to index or is there a better way to solve this issue?
Another question that I have is, when the jdbc-river polls the database again, it seems to re-index the entire data-set(given in sql query) again into ES. I am not sure, but is this done because elasticsearch wants to add fresh data as well as update any changes in the existing data? Is it possible to index only the fresh data, if the table's data is static?
Did you look at default mapping?
http://www.elasticsearch.org/guide/reference/mapping/dynamic-mapping.html
I think it can help you here.
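If you instead want an explicit mapping, a sketch for the user type might look like this; the field names are placeholders for whatever select * from users actually returns, and this uses the per-type _mapping endpoint of that era of Elasticsearch:

```shell
curl -XPUT 'localhost:9200/test_index/user/_mapping' -d '{
  "user" : {
    "properties" : {
      "name"       : { "type" : "string" },
      "email"      : { "type" : "string", "index" : "not_analyzed" },
      "created_at" : { "type" : "date" }
    }
  }
}'
```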
If you have an insertion date field in your datatable, you can use it to filter what you have to index.
See https://github.com/jprante/elasticsearch-river-jdbc#time-based-selecting
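A sketch of what that looks like in the river definition from the question, assuming the users table has a created_at column (the column name is an assumption):

```json
"jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://localhost:3306/MY-DATABASE",
    "user" : "root",
    "password" : "password",
    "sql" : "select * from users where created_at > now() - interval 1 minute",
    "poll" : "1m"
}
```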
HTH
David
Elasticsearch has dropped the river sync concept entirely. It is not a recommended path, because it usually doesn't make sense to keep the same normalized SQL table structure in a document store like Elasticsearch.
Say you have a Product entity with some attributes, and Reviews on the Product entity in a parent-child table, since there can be multiple reviews for the same product.
Products(Id, name, status,... etc)
Product_reviewes(product_id, review_id)
Reviews(id, note, rating,... etc)
In the document store you may want to create a single index, say product, that includes Product{attribute1, attribute2, ..., product reviews[review1, review2, ...]}.
Here is an approach to syncing in such a setup.
Assumption:
SQL Database(True Source of record)
Elastic Search or any other NoSql Document Store
Solution:
As soon as an update happens in the SQL database, publish an event to JMS/AMQP/a database queue/a file-system queue/Amazon SQS, etc., containing either the full Product or just the primary object ID (I would recommend just the ID).
The queue consumer should then call a web service to fetch the full object (if only the primary ID was pushed to the queue), or just take the object itself, and send the respective changes to Elasticsearch/the NoSQL database.
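The consumer side of those steps can be sketched in Python. Everything named here is a stand-in: fetch_product plays the web service, index_document plays the Elasticsearch client, and a stdlib queue plays the message broker; only the draining logic is the point:

```python
import queue

def fetch_product(product_id):
    # Stand-in for the web service call that returns the full
    # product document (attributes plus nested reviews).
    return {"id": product_id, "name": "product-%d" % product_id, "reviews": []}

def index_document(store, doc):
    # Stand-in for upserting the document into Elasticsearch
    # (or any other document store), keyed by the product ID.
    store[doc["id"]] = doc

def consume(q, store):
    """Drain the queue of product IDs published by the SQL side and
    upsert the corresponding full documents into the document store."""
    while True:
        try:
            product_id = q.get_nowait()
        except queue.Empty:
            break
        index_document(store, fetch_product(product_id))

# Usage: the SQL side publishes only the primary IDs.
q = queue.Queue()
for pid in (1, 2, 3):
    q.put(pid)
store = {}
consume(q, store)
# store now holds one document per published ID
```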