Creating multiple nodes in one neo4j CREATE statement

Create (sub:Subscription {name:"Paul",mobile:"8763xxxxx",email:"info@aliant.com"}),
Create (sub:Subscription {name:"Peter",mobile:"87638xxxxx",email:"info@aliant.com"}),
Create (sub:Subscription {name:"James",mobile:"87638xxxxx",email:"info@aliant.com"}),
Create (sub:Subscription {name:"Bill",mobile:"87638xxxxx",email:"info@aliant.com"})
Return sub;
I am very new to Neo4j/Cypher. Why do I get an "unexpected 'C'" error on the second Create? I am using the 2.3.2 community edition. The manual says this should work. I also tried the parameter example in section 12.1 of the manual; it doesn't work either.

Mixing commas with repeated CREATE keywords is illegal (and you can't reuse the same variable sub for all four nodes) - this form works:
Create (sub1:Subscription {name:"Paul",mobile:"8763xxxxx",email:"info@aliant.com"})
Create (sub2:Subscription {name:"Peter",mobile:"87638xxxxx",email:"info@aliant.com"})
Create (sub3:Subscription {name:"James",mobile:"87638xxxxx",email:"info@aliant.com"})
Create (sub4:Subscription {name:"Bill",mobile:"87638xxxxx",email:"info@aliant.com"})
Return sub1, sub2, sub3, sub4
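A single CREATE clause with comma-separated patterns is also valid Cypher, which matches the question's title most directly (shortened here to two of the four nodes):
Create (sub1:Subscription {name:"Paul",mobile:"8763xxxxx",email:"info@aliant.com"}),
       (sub2:Subscription {name:"Peter",mobile:"87638xxxxx",email:"info@aliant.com"})
Return sub1, sub2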
If you don't need a value back, then this will just create the nodes:
Create (:Subscription {name:"Paul",mobile:"8763xxxxx",email:"info@aliant.com"})
Create (:Subscription {name:"Peter",mobile:"87638xxxxx",email:"info@aliant.com"})
Create (:Subscription {name:"James",mobile:"87638xxxxx",email:"info@aliant.com"})
Create (:Subscription {name:"Bill",mobile:"87638xxxxx",email:"info@aliant.com"})

Try this:
UNWIND [{name:"Paul",mobile:"8763xxxxx",email:"info@aliant.com"}, {name:"Peter",mobile:"87638xxxxx",email:"info@aliant.com"}] AS subscription
CREATE (sub:Subscription)
SET sub = subscription
Or this:
[Note: This syntax is deprecated in Neo4j version 2.3 and may be removed in a future major release. See the above code using UNWIND for how to achieve the same functionality.]:
{
  "subscriptions" : [ {
    "name" : "A",
    "email" : "a@b.c"
  }, {
    "name" : "B",
    "email" : "x@y.z"
  } ]
}
Create (sub:Subscription {subscriptions}) Return sub
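For reference, the non-deprecated equivalent feeds the same subscriptions parameter through UNWIND - a sketch, assuming the parameter is submitted exactly as above:
UNWIND {subscriptions} AS subscription
CREATE (sub:Subscription)
SET sub = subscription
RETURN sub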
See if that helps, or refer to this link.

Related

grails gorm mongodb `like` functionality in criteria

Is like or rlike supported for searching a string in a collection's property value?
Does the collection need to define a text-type index for this to work? Unfortunately I cannot create a text index for the property. There are 100 million documents, and a text index killed the performance (MongoDB is on a single node). If this is not doable without a text index, that's fine with me; I will look for alternatives.
Given the collection below:
Message {
    'payload' : 'XML or JSON string'
    // few other properties
}
In Grails, I created a criteria query to return a list of documents which contain a specific string in the payload:
Message.list {
    projections {
        like('payload', searchString)
    }
}
I tried using rlike('payload', ".*${searchString}.*") as well. It did not return any documents.
Note: I was able to get the document when I ran the native query in the Mongo shell:
db.Message.find({payload : { $regex : ".*My search string.*" }}).pretty()
I got it working in a roundabout way; I believe there is a much better Grails solution. The criteria approach did not work, so I used the low-level API and converted the DBObjects to domain objects:
def query = ['payload' : [ '$regex' : /${searchString}/ ] ]
def dbObjects = Message.collection.find(query).skip(offset).limit(defaultPageSize).toArray()
// parse each raw DBObject's JSON form back into a Message domain object
dbObjects?.collect { new Message(new JsonSlurper().parseText(it.toString())) }
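One caveat with this workaround: if searchString comes from user input, any regex metacharacters in it will be interpreted by $regex. A minimal hardening sketch, assuming you want the string matched literally:
import java.util.regex.Pattern

// Pattern.quote escapes regex metacharacters so the input matches literally
def quoted = Pattern.quote(searchString)
def query = ['payload' : [ '$regex' : quoted ] ]
def dbObjects = Message.collection.find(query).skip(offset).limit(defaultPageSize).toArray()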

Neo4j Relationships not showing connecting lines in the graph

I created the following nodes and relationship in neo4j
CREATE (United_States:Citizenship { type : “Naturalized”})
CREATE (United_States:Citizenship { type : “Native_Born”})
CREATE (uid:Person { unique_id: 'A23AF39D-BEED-4FFC-B080-1362920FA7A8', id_type: '128bit_UUID' })
MATCH (uid:Person),(Native_Born:Citizenship) WHERE uid:Person="A23AF39D-BEED-4FFC-B080-1362920FA7A8" CREATE (uid) <- [ r:PersonUniqueIdentifier ] -> (Native_Born)
CREATE (fn:Person { first_name:'Willie', id_type:'128bit_UUID'})
CREATE (ln:Person { last_name:'Armstrong', id_type:'128bit_UUID'}))
CREATE CONSTRAINT ON (uid:Person) ASSERT Person.unique_id IS UNIQUE
CREATE INDEX ON :Person(unique_id)
I do not see the 'PersonUniqueIdentifier' Relation between the Citizenship node and id:Person node on the graph.
Screen shot of graph
Firstly, I would make a habit of creating the indexes/constraints first. There's not a lot of data here, but if you add an index after adding the data, it will need to go through all your existing nodes. Also, creating a constraint adds an index for you, so there's no need for the separate CREATE INDEX line. It seems like you're mixing up variables here, so refactoring a bit:
CREATE CONSTRAINT ON (person:Person) ASSERT person.unique_id IS UNIQUE
Also your Citizenship CREATEs are using the same variable name. I don't know if that would necessarily cause a problem, but it's simpler to do this anyway:
CREATE (:Citizenship { type : "Naturalized"}), (:Citizenship { type : "Native_Born"})
This statement looks fine to me (though, again, you could lose the variable if you wanted to):
CREATE (person:Person { unique_id: 'A23AF39D-BEED-4FFC-B080-1362920FA7A8', id_type: '128bit_UUID' })
Here there are a few problems. Here's how I would refactor it:
MATCH (person:Person),(citizenship:Citizenship)
WHERE
  person.unique_id = "A23AF39D-BEED-4FFC-B080-1362920FA7A8" AND
  citizenship.type = 'Native_Born'
CREATE (person)-[:HAS_CITIZENSHIP]->(citizenship)
I'm not really sure what you want to do here. It seems like you want to create one person, so I would do this:
CREATE (:Person { first_name:'Willie', id_type: '128bit_UUID', last_name:'Armstrong'})
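Finally, to confirm the relationship was created (and to see the connecting line in the browser), you can match and return the whole pattern:
MATCH (person:Person)-[r:HAS_CITIZENSHIP]->(citizenship:Citizenship)
RETURN person, r, citizenship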

py2neo create function creating duplicate nodes

I have a Neo4j database containing information on Congressmen. The problem I'm having is when there is a vacant position. When this happens, I use the same key:value in the "Congressmen" index. I tried the code below because the py2neo documentation states that the add function is idempotent.
# Check if we have any vacancies and if so if they match the one that we currently want to add
query = "start n=node:Congressmen('website:N/A') return n"
result = cypher.execute(graph_db, query.encode('utf-8', errors='ignore'))
# Match what we already have
if str(result[0]) != "[]":
    # create is idempotent so will only create a new node if properties are different
    rep, = graph_db.create({"name" : userName, "website" : web, "district" : int(district), "state" : child[2].text, "party" : child[4].text, "office" : child[5].text, "phone" : child[6].text, "house" : "House of Representatives"})
    cong = graph_db.get_or_create_index(neo4j.Node, "Congressmen")
    # add the node to the index
    cong.add("website", web, rep)
When I checked the interface after running the code 3 times, I had duplicate nodes.
Is it possible to prevent the nodes from duplicating and still be able to index them using the same key/value?
The Index.add method is certainly idempotent: the same entity can only be added once to a particular entry point. The GraphDatabaseService.create method is not, however. Each time you run create, a new node is created, and each run of add appends that new node to the index. You probably want to use the Index.add_if_none, Index.create_if_none or Index.get_or_create method instead.
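For example, a minimal sketch using Index.get_or_create (the property dict is illustrative and reuses the userName/web variables from the question):
from py2neo import neo4j

graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
cong = graph_db.get_or_create_index(neo4j.Node, "Congressmen")

# Idempotent with respect to the ("website", web) entry: returns the node
# already indexed there if one exists, otherwise creates and indexes a new one
rep = cong.get_or_create("website", web, {
    "name": userName,
    "website": web,
    "house": "House of Representatives",
})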

Elastic Search: how to see the indexed data

I had a problem with ElasticSearch and Rails, where some data was not indexed properly because of attr_protected. Where does Elastic Search store the indexed data? It would be useful to check if the actual indexed data is wrong.
Checking the mapping with Tire.index('models').mapping does not help, the field is listed.
Probably the easiest way to explore your ElasticSearch cluster is to use elasticsearch-head.
You can install it by doing:
cd elasticsearch/
./bin/plugin -install mobz/elasticsearch-head
Then (assuming ElasticSearch is already running on your local machine), open a browser window to:
http://localhost:9200/_plugin/head/
Alternatively, you can just use curl from the command line, eg:
Check the mapping for an index:
curl -XGET 'http://127.0.0.1:9200/my_index/_mapping?pretty=1'
Get some sample docs:
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1'
See the actual terms stored in a particular field (ie how that field has been analyzed):
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1' -d '
{
  "facets" : {
    "my_terms" : {
      "terms" : {
        "size" : 50,
        "field" : "foo"
      }
    }
  }
}'
More available here: http://www.elasticsearch.org/guide
UPDATE : Sense plugin in Marvel
By far the easiest way of writing curl-style commands for Elasticsearch is the Sense plugin in Marvel.
It comes with source highlighting, pretty indenting and autocomplete.
Note: Sense was originally a standalone chrome plugin but is now part of the Marvel project.
Absolutely the easiest way to see your indexed data is to view it in your browser. No downloads or installation needed.
I'm going to assume your elasticsearch host is http://127.0.0.1:9200.
Step 1
Navigate to http://127.0.0.1:9200/_cat/indices?v to list your indices. You'll see a table listing each index with its health, document count, and size on disk.
Step 2
Try accessing the desired index:
http://127.0.0.1:9200/products_development_20160517164519304
The output is a JSON document describing the index's aliases, mappings, and settings.
Notice the aliases, meaning we can just as well access the index at:
http://127.0.0.1:9200/products_development
Step 3
Navigate to http://127.0.0.1:9200/products_development/_search?pretty to see your data.
Aggregation Solution
Solving the problem by grouping the data: DrTech's answer above used facets for this, but facets are deprecated according to the Elasticsearch 1.0 reference:
Warning
Facets are deprecated and will be removed in a future release. You are encouraged to migrate to aggregations instead.
Facets are replaced by aggregations, which are introduced in an accessible manner in the Elasticsearch Guide (which loads an example into Sense).
Short Solution
The solution is the same, except aggregations require aggs instead of facets, and a size of 0, which sets the limit to the max integer. The example code requires the Marvel plugin:
# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
  "aggs" : {
    "indexed_occupier_names" : {    <= Whatever you want this to be
      "terms" : {
        "field" : "first_name",     <= Name of the field you want to aggregate
        "size" : 0
      }
    }
  }
}
Full Solution
Here is the Sense code to test it out: an example of a houses index, with an occupier type and a field first_name:
DELETE /houses

# Index example docs
POST /houses/occupier/_bulk
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "mark" }

# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
  "aggs" : {
    "indexed_occupier_names" : {
      "terms" : {
        "field" : "first_name",
        "size" : 0
      }
    }
  }
}
Response
Response showing the relevant aggregation code, with two keys in the index, john and mark:
....
"aggregations": {
  "indexed_occupier_names": {
    "buckets": [
      {
        "key": "john",
        "doc_count": 2    <= 2 documents matching
      },
      {
        "key": "mark",
        "doc_count": 1    <= 1 document matching
      }
    ]
  }
}
....
A tool that helps me a lot to debug ElasticSearch is ElasticHQ. Basically, it is an HTML file with some JavaScript. No need to install it anywhere, let alone in ES itself: just download it, unzip it, and open the HTML file with a browser.
I'm not sure it is the best tool for heavy ES users, but it is really practical for whoever is in a hurry to see their entries.
Kibana is also a good solution. It is a data visualization platform for the Elastic stack. If installed, it runs by default on port 5601.
Among the many things it provides, it has "Dev Tools" where you can do your debugging.
For example, you can check your available indices using the command:
GET /_cat/indices
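You can then pull back a few sample documents from any index listed there - a sketch, assuming a hypothetical index named my_index:
GET /my_index/_search
{
  "size": 5
}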
If you are using Google Chrome, then you can simply use the extension named Sense; it is also included if you use Marvel.
https://chrome.google.com/webstore/detail/sense-beta/lhjgkmllcaadmopgmanpapmpjgmfcfig
Following @JanKlimo's example, in the terminal all you have to do is:
To see all the indices:
$ curl -XGET 'http://127.0.0.1:9200/_cat/indices?v'
To see the content of the index products_development_20160517164519304:
$ curl -XGET 'http://127.0.0.1:9200/products_development_20160517164519304/_search?pretty=1'

Elastic Search - implement "Did you Mean"

We are trying to use Elastic Search in a Rails app and would like any input/code examples on the implementation of a "did you mean" feature. Essentially, we want to offer the end user an alternate query suggestion, like in Google.
As of version 0.90.0.Beta1, ElasticSearch has a "term suggest" feature included, which is what you are looking for:
http://www.elasticsearch.org/guide/reference/api/search/term-suggest/
E.g. from the query "devloping distibutd saerch engies" you get the suggestion "developing distributed search engines".
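For example, a request sketch using the term suggester against a hypothetical my_index with a body field (adapt both names to your mapping):
curl -XPOST 'http://127.0.0.1:9200/my_index/_search?pretty=1' -d '
{
  "suggest" : {
    "did_you_mean" : {
      "text" : "devloping distibutd saerch engies",
      "term" : {
        "field" : "body"
      }
    }
  }
}'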
Elasticsearch doesn't have it yet; it is open as an issue here. Basically, it is waiting for the next Lucene release.
I achieved a similar "did you mean" behaviour using phonetic analyzers, which worked for my use case (location names); that is not going to work for all use cases, though.
An example mapping: https://gist.github.com/1171014
You can then query using the REST API like this (misspelled "london"):
{
  "query": {
    "field": {
      "nameSounds": "lundon"
    }
  }
}
You can use fuzzy search:
"fuzzy" : {
"user" : {
"value" : "Jon",
"boost" : 1.0,
"fuzziness" : 3,
"prefix_length" : 0,
"max_expansions": 100
}
}
Check this link for the fuzzy query: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-fuzzy-query.html
