I am new to Neo4j, and I read the documentation for the traverse part of the REST API. There is an example here:
http://neo4j.com/docs/milestone/rest-api-traverse.html#rest-api-traversal-using-a-return-filter
{
"order" : "breadth_first",
"return_filter" : {
"body" : "position.endNode().getProperty('name').toLowerCase().contains('t')",
"language" : "javascript"
},
Can anybody tell me where I can find information about position, endNode(), getProperty(), and so on? It looks like embedded JavaScript, but I do not know what it means.
Thanks.
To quote the Traversals documentation:
The position object in the body of the return_filter and
prune_evaluator is a Path object representing the path from the start
node to the current traversal position.
You can start with the JavaDoc for Path.endNode() to understand how to interpret the return_filter.
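For illustration, a complete traversal call could look like this (a sketch: the node id 3, the port, and the max_depth value are assumptions; the endpoint shape follows the legacy REST API documented on the page linked above):
POST http://localhost:7474/db/data/node/3/traverse/node
{
  "order" : "breadth_first",
  "return_filter" : {
    "body" : "position.endNode().getProperty('name').toLowerCase().contains('t')",
    "language" : "javascript"
  },
  "max_depth" : 3
}
The response is then the list of nodes that pass the filter, serialized as JSON.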
[ADDENDUM, to answer a comment]
If you want to filter the traversal by label, you can use Node.hasLabel(), like this:
"return_filter" : {
"body" : "position.endNode().hasLabel(DynamicLabel.label('t'))",
"language" : "javascript"
}
I have a question about querying based on predefined constraints in PopotoJs. In this example, the graph can be filtered based on the constraints defined in the search boxes. In the sample file in that example's visualizations folder, a constraint is only defined for the "Person" node. It is specified in the sample HTML file like the following:
"Person": {
"returnAttributes": ["name", "born"],
"constraintAttribute": "name",
// Return a predefined constraint that can be edited in the page.
"getPredefinedConstraints": function (node) {
return personPredefinedConstraints;
},
....
In my graph I would like to apply that query function to more than one node. For example, I have two nodes: Contact (which has a "name" attribute) and Delivery (which has an "address" attribute).
I succeeded by defining two functions, one for each node. However, I also had to put in two search box forms with different input ids (like constraint1 and constraint2), and I had to make the queries in the associated search boxes.
Is there a way to make queries defined for multiple nodes work in one search box? For example, searching Contact name and/or Delivery address in the same search box?
Thanks
First I’d like to point out that the predefined constraints feature is still experimental (though fully functional) and doesn’t have any documentation yet.
It is intended to be used in the configuration to filter the data displayed in nodes; in the example, the search boxes are just there to show dynamically how it works.
A common use of this feature would be to add the list of predefined constraints you want in the configuration for every node type.
Let's take an example:
With the following configuration example, the graph will be filtered to show only Person nodes that have a "born" attribute and only Movie nodes whose title is in the provided list:
"Person": {
"getPredefinedConstraints": function (node) {
return ["has($identifier.born)"];
},
...
}
"Movie": {
"getPredefinedConstraints": function (node) {
return ["$identifier.title IN [\"The Matrix\", \"The Matrix Reloaded\", \"The Matrix Revolutions\"]"];
},
...
}
The $identifier variable is then replaced during query generation with the corresponding node identifier. For the Person case, the generated query would look like this:
MATCH (person:`Person`) WHERE has(person.born) RETURN person
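For the Movie configuration above, the generated query would presumably look like this (a sketch, not output copied from Popoto):
MATCH (movie:`Movie`) WHERE movie.title IN ["The Matrix", "The Matrix Reloaded", "The Matrix Revolutions"] RETURN movie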
In your case, if I understood your question correctly, you are trying to use this feature to implement a search box that filters the data. I'm still working on that feature, but it won't be available soon :(
This is a workaround, but maybe it could work in your use case: you could keep the search box value in a variable:
// Read the current search box value (d3 v3 selection syntax)
var value = d3.select("#constraint")[0][0].value;
inputValue = value;
Then use it in the predefined constraints of all the node types you want, updating it whenever the search box changes (see the sketch below).
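A minimal way to keep inputValue in sync could look like this (a sketch: the #constraint id comes from the example page, while refreshGraph() is a hypothetical helper standing in for whatever re-runs your Popoto query):
var inputValue = "";
d3.select("#constraint").on("keyup", function () {
    inputValue = this.value;  // "this" is the input element inside a d3 event handler
    refreshGraph();           // hypothetical: re-run the Popoto query with the new value
});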
In this example Person will be filtered based on the name attribute and Movie on title:
"Person": {
"getPredefinedConstraints": function (node) {
if (inputValue) {
return ["$identifier.name =~ '(?i).*" + inputValue + ".*'"];
} else {
return [];
}
},
...
}
"Movie": {
"getPredefinedConstraints": function (node) {
if (inputValue) {
return ["$identifier.title =~ '(?i).*" + inputValue + ".*'"];
} else {
return [];
}
},
...
}
Everything is in the HTML page of this example, so you can view the full source directly on the page.
@Popoto, thanks for the descriptive reply. I tried your suggestion and it worked pretty well. With the actual code, when I make a query it shows only the queried node and drops the other node counts to zero. I wanted a query that affects only the related node while the counts of the other nodes stay the same.
I tried a temporary solution for my problem. What I did is: export all the node data to a JSON file, search for my query constraint in the exported JSON, and, if the value exists in the JSON, run the query on the related node; if not, do nothing.
That way, of course, I needed to define many functions with different variable names (as many as there are nodes). It is not a proper way, but it works for now.
Is like or rlike supported for searching for a string in a collection's property value?
Does the collection need to define a text index for this to work? Unfortunately I cannot create a text index for the property: there are 100 million documents, and a text index killed the performance (MongoDB is on a single node). If this is not doable without a text index, that's fine with me; I will look for alternatives.
Given the collection below:
Message {
'payload' : 'XML or JSON string'
//few other properties
}
In Grails, I created a Criteria query to return a list of documents which contain a specific string in the payload:
Message.list {
projections {
like('payload' : searchString)
}
}
I tried using rlike('payload' : ".*${searchString}.*") as well. It did not return any documents.
Note: I was able to get the document when I ran the equivalent native query in the Mongo shell.
db.Message.find({payload : { $regex : ".*My search string.*" }}).pretty()
I got it working in a roundabout way. I believe there is a much better Grails solution. The Criteria approach did not work, so I used the low-level API and converted the DBObjects to domain objects.
import groovy.json.JsonSlurper

// Regex query via the low-level MongoDB API; results converted back to domain objects
def query = ['payload' : [ '$regex' : /${searchString}/ ] ]
def dbObjects = Message.collection.find(query).skip(offset).limit(defaultPageSize).toArray()
dbObjects?.collect { new Message(new JsonSlurper().parseText(it.toString())) }
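If you also want the case-insensitivity often used in shell queries, MongoDB's $options operator can be added to the same query map (a sketch of the same low-level approach; variable names as above):
// Case-insensitive variant using MongoDB's $regex with $options
def query = ['payload' : [ '$regex' : searchString, '$options' : 'i' ]]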
I had a problem with ElasticSearch and Rails where some data was not indexed properly because of attr_protected. Where does ElasticSearch store the indexed data? It would be useful to check whether the actual indexed data is wrong.
Checking the mapping with Tire.index('models').mapping does not help; the field is listed.
Probably the easiest way to explore your ElasticSearch cluster is to use elasticsearch-head.
You can install it by doing:
cd elasticsearch/
./bin/plugin -install mobz/elasticsearch-head
Then (assuming ElasticSearch is already running on your local machine), open a browser window to:
http://localhost:9200/_plugin/head/
Alternatively, you can just use curl from the command line, e.g.:
Check the mapping for an index:
curl -XGET 'http://127.0.0.1:9200/my_index/_mapping?pretty=1'
Get some sample docs:
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1'
See the actual terms stored in a particular field (i.e. how that field has been analyzed):
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1' -d '
{
"facets" : {
"my_terms" : {
"terms" : {
"size" : 50,
"field" : "foo"
}
}
}
}'
More is available here: http://www.elasticsearch.org/guide
UPDATE: the Sense plugin in Marvel
By far the easiest way of writing curl-style commands for Elasticsearch is the Sense plugin in Marvel.
It comes with source highlighting, pretty indenting and autocomplete.
Note: Sense was originally a standalone Chrome plugin but is now part of the Marvel project.
Absolutely the easiest way to see your indexed data is to view it in your browser. No downloads or installation needed.
I'm going to assume your elasticsearch host is http://127.0.0.1:9200.
Step 1
Navigate to http://127.0.0.1:9200/_cat/indices?v to list your indices. You'll see a table with one row per index: health, status, index name, shard counts, document counts, and store size.
Step 2
Try accessing the desired index:
http://127.0.0.1:9200/products_development_20160517164519304
The output will be a JSON document describing the index settings, mappings, and aliases. Notice the aliases, meaning we can also access the index at:
http://127.0.0.1:9200/products_development
Step 3
Navigate to http://127.0.0.1:9200/products_development/_search?pretty to see your data:
Aggregation Solution
Solving the problem by grouping the data: DrTech's answer used facets to manage this, but facets are being deprecated according to the Elasticsearch 1.0 reference:
Warning
Facets are deprecated and will be removed in a future release. You are encouraged to
migrate to aggregations instead.
Facets are replaced by aggregations, introduced in an accessible manner in the Elasticsearch Guide, which loads an example into Sense.
Short Solution
The solution is the same, except aggregations require aggs instead of facets, and "size": 0 sets the limit to the maximum integer. The example code requires the Marvel plugin.
# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
"aggs" : {
"indexed_occupier_names" : { <= Whatever you want this to be
"terms" : {
"field" : "first_name", <= Name of the field you want to aggregate
"size" : 0
}
}
}
}
Full Solution
Here is the Sense code to test it out, using an example houses index with an occupier type and a first_name field:
DELETE /houses
# Index example docs
POST /houses/occupier/_bulk
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "john" }
{ "index": {}}
{ "first_name": "mark" }
# Basic aggregation
GET /houses/occupier/_search?search_type=count
{
"aggs" : {
"indexed_occupier_names" : {
"terms" : {
"field" : "first_name",
"size" : 0
}
}
}
}
Response
Response showing the relevant aggregation section, with two keys in the index, john and mark:
....
"aggregations": {
"indexed_occupier_names": {
"buckets": [
{
"key": "john",
"doc_count": 2 <= 2 documents matching
},
{
"key": "mark",
"doc_count": 1 <= 1 document matching
}
]
}
}
....
A tool that helps me a lot to debug ElasticSearch is ElasticHQ. Basically, it is an HTML file with some JavaScript. No need to install it anywhere, let alone in ES itself: just download it, unzip it, and open the HTML file with a browser.
Not sure it is the best tool for heavy ES users, but it is really practical for whoever is in a hurry to see their entries.
Kibana is also a good solution. It is a data visualization platform for the Elastic Stack; if installed, it runs by default on port 5601.
Among the many things it provides, it has "Dev Tools", where you can do your debugging.
For example, you can check your available indexes using this command:
GET /_cat/indices
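And to peek at the documents actually stored in one of those indexes, a search in the same Dev Tools console could look like this (my_index is a placeholder for one of your index names):
GET /my_index/_search
{
  "query": { "match_all": {} },
  "size": 10
}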
If you are using Google Chrome, then you can simply use the extension named Sense; it is the same tool that ships with Marvel.
https://chrome.google.com/webstore/detail/sense-beta/lhjgkmllcaadmopgmanpapmpjgmfcfig
Following @JanKlimo's example, on the terminal all you have to do is:
To see all the indices:
$ curl -XGET 'http://127.0.0.1:9200/_cat/indices?v'
To see the content of the index products_development_20160517164519304:
$ curl -XGET 'http://127.0.0.1:9200/products_development_20160517164519304/_search?pretty=1'
We are trying to use Elastic Search in a Rails app and would like any input or code examples on implementing a "did you mean" feature. Essentially, we want to offer the end user an alternate query suggestion, as Google does.
As of version 0.90.0.Beta1, ElasticSearch has a "term suggest" feature included, which is what you are looking for:
http://www.elasticsearch.org/guide/reference/api/search/term-suggest/
E.g. from this query: "devloping distibutd saerch engies" you get this result: "developing distributed search engines".
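A request against that API could look like this (a sketch: my_index and the body field are assumptions about your data):
curl -XPOST 'http://127.0.0.1:9200/my_index/_suggest?pretty=1' -d '
{
  "my-suggestion" : {
    "text" : "devloping distibutd saerch engies",
    "term" : {
      "field" : "body"
    }
  }
}'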
Elasticsearch doesn't have it yet; it is open as an issue here. Basically, it is waiting for the next Lucene release.
I achieved a similar "did you mean" behaviour using phonetic analyzers. That worked for my use case (location names), but it is not going to work for all use cases...
An example mapping: https://gist.github.com/1171014
So you can query using the REST API like this (misspelled "london"):
{
"query": {
"field": {
"nameSounds": "lundon"
}
}
}
You can use fuzzy search:
"fuzzy" : {
"user" : {
"value" : "Jon",
"boost" : 1.0,
"fuzziness" : 3,
"prefix_length" : 0,
"max_expansions": 100
}
}
Check this link for fuzzy : http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-fuzzy-query.html
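For context, the fragment above belongs inside a query clause; a complete request might look like this (a sketch: the index name and the user field are assumptions):
curl -XGET 'http://127.0.0.1:9200/my_index/_search?pretty=1' -d '
{
  "query" : {
    "fuzzy" : {
      "user" : {
        "value" : "Jon",
        "fuzziness" : 2
      }
    }
  }
}'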
I am trying to store a network layout in CouchDB, but my solution produces a rather randomized graph.
I store nodes as documents:
{_id ,
nodeName,
group}
and store links in the traditional form:
{_id, source_id, target_id, value}
Following multiple tutorials on handling joins and multiple relationships in CouchDB, I created this view:
function(doc) {
if(doc.type == 'connection') {
if (doc.source_id)
emit("source", {'_id': doc.source_id});
if(doc.target_id)
emit("target", {'_id': doc.target_id});
}
}
This should emit a sequence of source and target ids. I then pass it to a list function with include_docs=true that assumes source and target come in pairs and stitches everything back into a structure like this:
{
"nodes":[
{"nodeName":"Name 1","group":"1"},
{"nodeName":"Name 2","group":"1"},
],
"links": [
{"source":7,"target":0,"value":1},
{"source":7,"target":5,"value":1}
]
}
Although my list function produces proper JSON, the view map returns all the source rows followed by all the target rows (the keys sort that way), so the pairs are broken up.
So far I don't have any idea how to make this work properly. I am happy to fetch additional values from the document _id in the list function, but so far I haven't found any good examples.
Alternative ways of achieving the same goal are welcome. The _id values are standard CouchDB ids so far.
Update: while writing the question I came up with a different view, which solved my immediate problem, but I would still like to see other options.
updated map:
function(doc) {
if(doc.type == 'connection') {
if (doc.source_id)
emit([doc._id,0,"source"], {'_id': doc.source_id});
if(doc.target_id)
emit([doc._id,1,"target"], {'_id': doc.target_id});
}
}
Your updated map function makes more sense. However, you don't need the 0 and 1 in your key, since you already have "source" and "target"; a simplified version is sketched below.
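A sketch of that simplification (same behaviour: "source" still sorts before "target" within each connection's rows):
function(doc) {
    if (doc.type == 'connection') {
        if (doc.source_id)
            emit([doc._id, "source"], {'_id': doc.source_id});
        if (doc.target_id)
            emit([doc._id, "target"], {'_id': doc.target_id});
    }
}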