Unexpected neo4j cypher query result

I would like to determine the relative percentage of conversation duration spent with neighbors who know a specific person.
For example, when observing node A, we first have to know how much time he spent talking to all of his neighbors, which is done with the following query:
neo4j-sh (0)$ start a = node(351061) match (a)-[r:TALKED_TO]->(b) return sum(r.duration)
==> +-----------------+
==> | sum(r.duration) |
==> +-----------------+
==> | 12418 |
==> +-----------------+
==> 1 row, 0 ms
Next we have to check which of his neighbors know a specific person (say c) and sum only the durations of conversations between a and b where b knows c:
neo4j-sh (0)$ start a = node(351061) match (a)-[r:TALKED_TO]->(b)-[p:KNOWS]->(c) return sum(r.duration)
==> +-----------------+
==> | sum(r.duration) |
==> +-----------------+
==> | 21013 |
==> +-----------------+
==> 1 row, 0 ms
What doesn't seem logical here is that the second sum is larger than the first, whereas the second is supposed to be just a part of the first. Does anyone know what could be causing such a result? The error appeared for 7 users out of 15000.

You're not looking at a specific person c in that query. You're matching all paths to any :KNOWS relationship, so if you have a->b->c and a->b->d, the duration of a->b will get counted twice.
What you probably need to do is this instead:
start a = node(351061), c=node(xxxxx) // set c explicitly
match (a)-[r:TALKED_TO]->(b)
where b-[:KNOWS]->c // putting this in the where clause forces you to set C
return sum(r.duration)
Here's an example in console:
http://console.neo4j.org/r/irm0zy
Remember that match broadens and where tightens the results. You can also do this with match, but you need to specify c in start.
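For what it's worth, the MATCH-only variant mentioned above might look like the sketch below, keeping the placeholder node id from the snippet above and still binding c in start:
start a = node(351061), c = node(xxxxx)
match (a)-[r:TALKED_TO]->(b)-[:KNOWS]->(c) // c is bound, so only neighbors b who know that specific c are matched
return sum(r.duration)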
A good way to test out what your aggregate functions are doing is to return all of your named variables (or set a path you can return)--this way you see the aggregation separated into subtotals. Like so:
start a=node(1)
match a-[r:TALKED_TO]->b-[:KNOWS]->c
return sum(r.duration), a,b,c;
+-----------------------------------------------------------------------------------------------+
| sum(r.duration) | a | b | c |
+-----------------------------------------------------------------------------------------------+
| 20 | Node[1]{name:"person1"} | Node[2]{name:"person2"} | Node[4]{name:"person4"} |
| 20 | Node[1]{name:"person1"} | Node[2]{name:"person2"} | Node[3]{name:"person3"} |
| 20 | Node[1]{name:"person1"} | Node[5]{name:"person5"} | Node[6]{name:"person6"} |
+-----------------------------------------------------------------------------------------------+
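Once the double counting is fixed, both sums can be combined into the relative percentage the question asks for. A minimal sketch in the same legacy START style, with c again left as a placeholder node id:
start a = node(351061), c = node(xxxxx)
match (a)-[r:TALKED_TO]->(b)
with a, c, sum(r.duration) as totalDuration // total time spent with all neighbors
match (a)-[r:TALKED_TO]->(b)
where (b)-[:KNOWS]->(c) // only neighbors who know c
return 100.0 * sum(r.duration) / totalDuration as percentage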

Related

How can I calculate the paths between two nodes with hops in range (1,5) (neo4j)?

I'm trying to find all possible paths between two nodes. I've used a few Cypher queries which do the required job, but they take a lot of time as the number of hops increases. This is the query:
match p = (n{name:"Node1"})-[:Route*1..5]-(b{name:"Node2"}) return p
Also, if I use shortestpath it limits the result once a path with the minimum number of hops is found. So I don't get the results with two or more hops if a direct connection (1 hop) is found between the nodes.
match p = shortestpath((n{name:"Node1"})-[:Route*1..5]-(b{name:"Node2"})) return p
and if I increase the minimum hop count to 2 or more it throws an exception:
shortestPath(...) does not support a minimal length different from 0 or 1
Is there any alternative framework or algorithm to get all paths in minimal time?
P.S. I'm looking for something on the order of ms. Currently all queries with more than 3 hops take a few seconds to complete.
I gather you are trying to speed up your original query involving variable-length paths. The shortestpath function is not appropriate for your query, as it literally tries to find a shortest path -- not all paths up to a certain length.
The execution plan for your original query (using sample data) looks like this:
+-----------------------+----------------+------+---------+-------------------+---------------------------------------------+
| Operator | Estimated Rows | Rows | DB Hits | Identifiers | Other |
+-----------------------+----------------+------+---------+-------------------+---------------------------------------------+
| +ProduceResults | 0 | 1 | 0 | p | p |
| | +----------------+------+---------+-------------------+---------------------------------------------+
| +Projection | 0 | 1 | 0 | anon[30], b, n, p | ProjectedPath(Set(anon[30], n),) |
| | +----------------+------+---------+-------------------+---------------------------------------------+
| +Filter | 0 | 1 | 2 | anon[30], b, n | n.name == { AUTOSTRING0} |
| | +----------------+------+---------+-------------------+---------------------------------------------+
| +VarLengthExpand(All) | 0 | 2 | 7 | anon[30], b, n | (b)<-[:Route*]-(n) |
| | +----------------+------+---------+-------------------+---------------------------------------------+
| +Filter | 0 | 1 | 3 | b | b.name == { AUTOSTRING1} |
| | +----------------+------+---------+-------------------+---------------------------------------------+
| +AllNodesScan | 3 | 3 | 4 | b | |
+-----------------------+----------------+------+---------+-------------------+---------------------------------------------+
So, your original query is scanning through every node to find the node(s) that match the b pattern. Then, it expands all variable-length paths starting at b. And then it filters those paths to find the one(s) that end with a node that matches the pattern for n.
Here are a few suggestions that should speed up your query, although you'll have to test it on your data to see by how much:
Give each node a label. For example, Foo.
Create an index that can speed up the search for your end nodes. For example:
CREATE INDEX ON :Foo(name);
Modify your query to force the use of the index on both end nodes. For example:
MATCH p =(n:Foo { name:"Node1" })-[:Route*1..5]-(b:Foo { name:"Node2" })
USING INDEX n:Foo(name)
USING INDEX b:Foo(name)
RETURN p;
After the above changes, the execution plan is:
+-----------------+------+---------+-----------------------------+-----------------------------+
| Operator | Rows | DB Hits | Identifiers | Other |
+-----------------+------+---------+-----------------------------+-----------------------------+
| +ColumnFilter | 1 | 0 | p | keep columns p |
| | +------+---------+-----------------------------+-----------------------------+
| +ExtractPath | 1 | 0 | anon[33], anon[34], b, n, p | |
| | +------+---------+-----------------------------+-----------------------------+
| +PatternMatcher | 1 | 3 | anon[33], anon[34], b, n | |
| | +------+---------+-----------------------------+-----------------------------+
| +SchemaIndex | 1 | 2 | b, n | { AUTOSTRING1}; :Foo(name) |
| | +------+---------+-----------------------------+-----------------------------+
| +SchemaIndex | 1 | 2 | n | { AUTOSTRING0}; :Foo(name) |
+-----------------+------+---------+-----------------------------+-----------------------------+
This query plan uses the index to directly get the b and n nodes -- without scanning. This, by itself, should provide a speed improvement. And then this plan uses the "PatternMatcher" to find the variable-length paths between those end nodes. You will have to try this query out to see how efficient the "PatternMatcher" is in doing that.
From your description I assume that you want to get a shortest path based on some weight like a duration property on the :Route relationships.
If that is true using shortestPath in cypher is not helpful since it just takes into account the number of hops. Weighted shortest paths are not yet available in Cypher in an efficient way.
The Java API has support for weighted shortest paths via Dijkstra or A* through the GraphAlgoFactory class. For the simple case where your cost function is just the value of a relationship property (as mentioned above), you can also use an existing REST endpoint.
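If you do want to stay in Cypher, a weighted shortest path can still be expressed directly, at the cost of enumerating every candidate path. A naive (and potentially slow) sketch, assuming a duration cost property on the :Route relationships and the :Foo label and index suggested above:
MATCH p = (n:Foo { name:"Node1" })-[:Route*1..5]-(b:Foo { name:"Node2" })
RETURN p, reduce(cost = 0, r IN relationships(p) | cost + r.duration) AS totalCost
ORDER BY totalCost ASC
LIMIT 1;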

Neo4J search greater than on Index

I have a Neo4j database with a large number of nodes (~15M) on which I want to perform a greater-than search on one of their properties. I have that property indexed. The property is a float value.
When I do an exact match like MATCH (i:Label) WHERE i.property = $value RETURN count(i) I get the result within a very short time. But when I do the same search with greater than, i.e. MATCH (i:Label) WHERE i.property > $value RETURN count(i) it just takes forever. What is the correct way to do this in Cypher?
Edit: Execution plan:
+--------------------------------------------+
| No data returned, and nothing was changed. |
+--------------------------------------------+
74 ms
Compiler CYPHER 2.2
Planner COST
EagerAggregation
|
+Filter
|
+NodeByLabelScan
+------------------+---------------+-------------+------------------------------+
| Operator | EstimatedRows | Identifiers | Other |
+------------------+---------------+-------------+------------------------------+
| EagerAggregation | 2064 | count(r) | |
| Filter | 4260557 | r | r.date > Subtract(Divide( |
| | | | TimestampFunction(),{ |
| | | | AUTOINT0}),Literal(86400)) |
| NodeByLabelScan | 14201858 | r | :Request |
+------------------+---------------+-------------+------------------------------+
Total database accesses: ?
Another approach is to create additional/aggregation nodes for that property and to search through those nodes.
Example
Let's say the property is a value from 0 to 100.
Create the following nodes:
* 0to30
* 31to60
* 61to100
Create relationships from your nodes to these 'aggregate' nodes.
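A minimal sketch of that wiring, assuming the property from the question lives on l.property (the backticks are needed because these labels start with a digit):
// assign each node in the 0-30 range to its bucket node; repeat per bucket
MATCH (l:Label)
WHERE l.property >= 0 AND l.property <= 30
MERGE (a:`0to30`)
MERGE (l)-[:IN]->(a);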
Then search through those nodes (again note the backticks around the label):
MATCH (l:Label)-[i:IN]->(a:`0to30`)
RETURN l
Unfortunately, Neo4j doesn't use its indexes for inequalities in 2.2.x. In the upcoming 2.3.x this should be supported.

Make my Neo4j queries faster

I'm evaluating Neo4j for our application, and now am at a point where performance is an issue. I've created a lot of nodes and edges that I'm doing some queries against. The following is a detail of the nodes and edges data in this database:
I am trying to do a search that traverses the yellow arrows of this diagram. What I have so far is the following query:
MATCH (n:LABEL_TYPE_Project {id:'14'})
-[:RELATIONSHIP_scopes*1]->(m:LABEL_TYPE_PermissionNode)
-[:RELATIONSHIP_observedBy*1]->(o:LABEL_TYPE_Item)
WHERE m.id IN ['1', '2', '6', '12', '12064', '19614', '19742', '19863', '21453', '21454', '21457', '21657', '21658', '31123', '31127', '31130', '47691', '55603', '55650', '56026', '56028', '56029', '56050', '56052', '85383', '85406', '85615', '105665', '1035242', '1035243']
AND o.content =~ '.*some string.*'
RETURN o
LIMIT 20
(The variable paths above have been updated, see "Update 2")
The above query takes a barely-acceptable 1200ms. It only returns the requested 20 items. If I want a count of the same, this takes forever:
MATCH ... more of the same ...
RETURN count(o)
The above query takes many minutes. This is Neo4j 2.2.0-M03 Community running on CentOS. There are around 385,000 nodes, 170,000 of them of type Item.
I have created indices on all id fields (programmatically, index().forNodes(...).add(...)), also on the content field (CREATE INDEX ... statement).
Are there fundamental improvements yet to be made to my queries? Things I can try?
Much appreciated.
This question was moved over from Neo4j discussion group on Google per their suggestions.
Update 1
As requested:
:schema
Gives:
Indexes
ON :LABEL_TYPE_Item(id) ONLINE
ON :LABEL_TYPE_Item(active) ONLINE
ON :LABEL_TYPE_Item(content) ONLINE
ON :LABEL_TYPE_PermissionNode(id) ONLINE
ON :LABEL_TYPE_Project(id) ONLINE
No constraints
(This is updated, see "Update 2")
Update 2
I have made the following noteworthy improvements to the query:
Shame on me, I did have super-nodes for all TYPE_Projects (not by design; I just messed up the importing algorithm I was using), and I have removed them now
I had a lot of "strings" that could have been proper data types, such as integers and booleans, and I am now importing them as such (you'll see in the updated queries below that I removed a lot of quotes)
As pointed out, I had variable length paths and I fixed those
As pointed out, I should have had uniqueness indices instead of regular indices and I fixed that
As a consequence:
:schema
Now gives:
Indexes
ON :LABEL_TYPE_Item(active) ONLINE
ON :LABEL_TYPE_Item(content) ONLINE
ON :LABEL_TYPE_Item(id) ONLINE (for uniqueness constraint)
ON :LABEL_TYPE_PermissionNode(id) ONLINE (for uniqueness constraint)
ON :LABEL_TYPE_Project(id) ONLINE (for uniqueness constraint)
Constraints
ON (label_type_item:LABEL_TYPE_Item) ASSERT label_type_item.id IS UNIQUE
ON (label_type_project:LABEL_TYPE_Project) ASSERT label_type_project.id IS UNIQUE
ON (label_type_permissionnode:LABEL_TYPE_PermissionNode) ASSERT label_type_permissionnode.id IS UNIQUE
The query now looks like this:
MATCH (n:LABEL_TYPE_Project {id:14})
-[:RELATIONSHIP_scopes]->(m:LABEL_TYPE_PermissionNode)
-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE m.id IN [1, 2, 6, 12, 12064, 19614, 19742, 19863, 21453, 21454, 21457, 21657, 21658, 31123, 31127, 31130, 47691, 55603, 55650, 56026, 56028, 56029, 56050, 56052, 85383, 85406, 85615, 105665, 1035242, 1035243]
AND o.content =~ '.*some string.*'
RETURN o
LIMIT 20
The above query now takes approx. 350ms.
I still want a count of the same:
MATCH ...
RETURN count(o)
The above query now takes approx. 1100ms. Although that's much better, and barely acceptable for this particular query, I've already found some more-complex queries that inherently take longer. So a further improvement on this query here would be great.
As requested here is the PROFILE for RETURN o query (for the improved query):
Compiler CYPHER 2.2
Planner COST
Projection
|
+Limit
|
+Filter(0)
|
+Expand(All)(0)
|
+Filter(1)
|
+Expand(All)(1)
|
+NodeUniqueIndexSeek
+---------------------+---------------+-------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Operator | EstimatedRows | Rows | DbHits | Identifiers | Other |
+---------------------+---------------+-------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Projection | 1900 | 20 | 0 | m, n, o | o |
| Limit | 1900 | 20 | 0 | m, n, o | { AUTOINT32} |
| Filter(0) | 1900 | 20 | 131925 | m, n, o | (hasLabel(o:LABEL_TYPE_Item) AND Property(o,content(23)) ~= /{ AUTOSTRING31}/) |
| Expand(All)(0) | 4993 | 43975 | 43993 | m, n, o | (m)-[:RELATIONSHIP_observedBy]->(o) |
| Filter(1) | 2 | 18 | 614 | m, n | (hasLabel(m:LABEL_TYPE_PermissionNode) AND any(-_-INNER-_- in Collection(List({ AUTOINT1}, { AUTOINT2}, { AUTOINT3}, { AUTOINT4}, { AUTOINT5}, { AUTOINT6}, { AUTOINT7}, { AUTOINT8}, { AUTOINT9}, { AUTOINT10}, { AUTOINT11}, { AUTOINT12}, { AUTOINT13}, { AUTOINT14}, { AUTOINT15}, { AUTOINT16}, { AUTOINT17}, { AUTOINT18}, { AUTOINT19}, { AUTOINT20}, { AUTOINT21}, { AUTOINT22}, { AUTOINT23}, { AUTOINT24}, { AUTOINT25}, { AUTOINT26}, { AUTOINT27}, { AUTOINT28}, { AUTOINT29}, { AUTOINT30})) where Property(m,id(0)) == -_-INNER-_-)) |
| Expand(All)(1) | 11 | 18 | 19 | m, n | (n)-[:RELATIONSHIP_scopes]->(m) |
| NodeUniqueIndexSeek | 1 | 1 | 1 | n | :LABEL_TYPE_Project(id) |
+---------------------+---------------+-------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
And here is the PROFILE for RETURN count(o) query (for the improved query):
Compiler CYPHER 2.2
Planner COST
Limit
|
+EagerAggregation
|
+Filter(0)
|
+Expand(All)(0)
|
+Filter(1)
|
+Expand(All)(1)
|
+NodeUniqueIndexSeek
+---------------------+---------------+--------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Operator | EstimatedRows | Rows | DbHits | Identifiers | Other |
+---------------------+---------------+--------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Limit | 44 | 1 | 0 | count(o) | { AUTOINT32} |
| EagerAggregation | 44 | 1 | 0 | count(o) | |
| Filter(0) | 1900 | 101 | 440565 | m, n, o | (hasLabel(o:LABEL_TYPE_Item) AND Property(o,content(23)) ~= /{ AUTOSTRING31}/) |
| Expand(All)(0) | 4993 | 146855 | 146881 | m, n, o | (m)-[:RELATIONSHIP_observedBy]->(o) |
| Filter(1) | 2 | 26 | 850 | m, n | (hasLabel(m:LABEL_TYPE_PermissionNode) AND any(-_-INNER-_- in Collection(List({ AUTOINT1}, { AUTOINT2}, { AUTOINT3}, { AUTOINT4}, { AUTOINT5}, { AUTOINT6}, { AUTOINT7}, { AUTOINT8}, { AUTOINT9}, { AUTOINT10}, { AUTOINT11}, { AUTOINT12}, { AUTOINT13}, { AUTOINT14}, { AUTOINT15}, { AUTOINT16}, { AUTOINT17}, { AUTOINT18}, { AUTOINT19}, { AUTOINT20}, { AUTOINT21}, { AUTOINT22}, { AUTOINT23}, { AUTOINT24}, { AUTOINT25}, { AUTOINT26}, { AUTOINT27}, { AUTOINT28}, { AUTOINT29}, { AUTOINT30})) where Property(m,id(0)) == -_-INNER-_-)) |
| Expand(All)(1) | 11 | 26 | 27 | m, n | (n)-[:RELATIONSHIP_scopes]->(m) |
| NodeUniqueIndexSeek | 1 | 1 | 1 | n | :LABEL_TYPE_Project(id) |
+---------------------+---------------+--------+--------+-------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Remaining suggestions:
Use MATCH ... WITH x MATCH ...->(x) syntax: this did not help me at all, so far
Use Lucene indexes: see results in "Update 3"
Use precomputation: this will not help me, since the queries are going to be rather variant
Update 3
I've been playing with full-text search, and indexed the content property as follows:
IndexManager indexManager = getGraphDb().index();
// configure a manual Lucene index as a fulltext index
Map<String, String> customConfiguration = MapUtil.stringMap(IndexManager.PROVIDER, "lucene", "type", "fulltext");
Index<Node> index = indexManager.forNodes("INDEX_FULL_TEXT_content_Item", customConfiguration);
index.add(node, "content", value);
When I run the following query this takes approx. 1200ms:
START o=node:INDEX_FULL_TEXT_content_Item("content:*some string*")
MATCH (n:LABEL_TYPE_Project {id:14})
-[:RELATIONSHIP_scopes]->(m:LABEL_TYPE_PermissionNode)
-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE m.id IN [1, 2, 6, 12, 12064, 19614, 19742, 19863, 21453, 21454, 21457, 21657, 21658, 31123, 31127, 31130, 47691, 55603, 55650, 56026, 56028, 56029, 56050, 56052, 85383, 85406, 85615, 105665, 1035242, 1035243]
RETURN count(o);
Here is the PROFILE for this query:
Compiler CYPHER 2.2
Planner COST
EagerAggregation
|
+Filter(0)
|
+Expand(All)(0)
|
+NodeHashJoin
|
+Filter(1)
| |
| +NodeByIndexQuery
|
+Expand(All)(1)
|
+NodeUniqueIndexSeek
+---------------------+---------------+--------+--------+-------------+------------------------------------------------------------------------+
| Operator | EstimatedRows | Rows | DbHits | Identifiers | Other |
+---------------------+---------------+--------+--------+-------------+------------------------------------------------------------------------+
| EagerAggregation | 50 | 1 | 0 | count(o) | |
| Filter(0) | 2533 | 166 | 498 | m, n, o | (Property(n,id(0)) == { AUTOINT0} AND hasLabel(n:LABEL_TYPE_Project)) |
| Expand(All)(0) | 32933 | 166 | 332 | m, n, o | (m)<-[:RELATIONSHIP_scopes]-(n) |
| NodeHashJoin | 32933 | 166 | 0 | m, o | o |
| Filter(1) | 1 | 553 | 553 | o | hasLabel(o:LABEL_TYPE_Item) |
| NodeByIndexQuery | 1 | 553 | 554 | o | Literal(content:*itzndby*); INDEX_FULL_TEXT_content_Item |
| Expand(All)(1) | 64914 | 146855 | 146881 | m, o | (m)-[:RELATIONSHIP_observedBy]->(o) |
| NodeUniqueIndexSeek | 27 | 26 | 30 | m | :LABEL_TYPE_PermissionNode(id) |
+---------------------+---------------+--------+--------+-------------+------------------------------------------------------------------------+
Things to think about/try: in general with query optimization, the #1 name of the game is to figure out ways to consider less data in the first place when answering the query. It's far less fruitful to consider the same data faster than it is to consider less data.
Lucene indexes on your content fields. My understanding is that the regex you're doing isn't narrowing Cypher's search path at all, so it basically has to look at every o:LABEL_TYPE_Item and run the regex against that field. Your regex is only looking for a substring, though, so Lucene may help cut down the number of nodes Cypher has to consider before it can give you a result.
Your relationship paths are variable length (-[:RELATIONSHIP_scopes*1]->), yet the image you give us suggests you only ever need one hop. On both relationship hops, depending on how your graph is structured (and how much data you have), you might be looking through way more information than you need. Consider those relationship hops and your data model carefully; can you replace them with -[:RELATIONSHIP_scopes]-> instead? Note that you have a WHERE clause on m nodes, so you may be traversing more of those than required.
Check the query plan (via PROFILE, google for docs). One trick I see a lot of people using is pushing the most restrictive part of their query to the top, in front of a WITH block. This reduces the number of "starting points".
What I mean is taking this query...
MATCH (foo)-[:stuff*]->(bar) // (bunch of other complex stuff)
WHERE bar.id = 5
RETURN foo
And turning it into this:
MATCH bar
WHERE bar.id = 5
WITH bar
MATCH (foo)-[:stuff*]->(bar)
RETURN foo;
(Check output via PROFILE, this trick can be used to force the query execution plan to do the most selective thing first, drastically reducing the amount of the graph that cypher considers/traverses...better performance)
Precompute; if you have a particular set of nodes that you use all the time (those with the IDs you identify) you can create a custom index node of your own. Let's call it (foo:SpecialIndex { label: "My Nifty Index" }). This is akin to a "view" in a relational database. You link the stuff you want to access quickly to foo. Then your query, instead of having that big WHERE id IN [blah blah] clause, simply looks up foo:SpecialIndex, traverses to the hit points, and goes from there. This trick works well when the list of entry points in your list of IDs is large, rapidly growing, or both. It keeps all the same computation you'd do normally, but shifts some of it to be done ahead of time so you don't do it every time you run the query.
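A minimal sketch of that index-node pattern, assuming a hypothetical :INDEXES relationship type and reusing the permission-node ids from the question:
// build the index node once and link the frequently used permission nodes to it
MERGE (foo:SpecialIndex { label: "My Nifty Index" })
WITH foo
MATCH (m:LABEL_TYPE_PermissionNode)
WHERE m.id IN [1, 2, 6, 12, 12064, 19614] // ...and the rest of the id list
MERGE (foo)-[:INDEXES]->(m);
// queries then enter the graph through the index node instead of the big IN clause
MATCH (foo:SpecialIndex { label: "My Nifty Index" })-[:INDEXES]->(m:LABEL_TYPE_PermissionNode)-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE o.content =~ '.*some string.*'
RETURN o LIMIT 20;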
Got any supernodes in that graph? (A supernode is an extremely densely connected node, i.e. one with a million outbound relationships) -- don't do that. Try to arrange your data model such that you don't have supernodes, if at all possible.
JVM/Node Cache tweaks. Sometimes you can get an advantage by changing your node caching strategy, or available memory to do the caching. The idea here is that instead of hitting data on disk, if you warm your cache up then you get at least some of the I/O out of the way. This one can help in some cases, but it wouldn't be my first stop unless the way you've configured the JVM or neo4j is already somewhat memory-poor. This one probably also helps you a little less because it tries to make your current access pattern faster, rather than improving your actual access pattern.
Can you share your output of :schema in the browser?
If you don't have the uniqueness constraints yet, do:
create constraint on (p:LABEL_TYPE_Project) assert p.id is unique;
create constraint on (m:LABEL_TYPE_PermissionNode) assert m.id is unique;
The manual indexes you created only help for Item.content if you index it with FULLTEXT_CONFIG and then use START o=node:items("content:(some string)") MATCH ...
Since in Neo4j you can always traverse relationships in both directions, you don't need inverse relationships; they only hurt performance because queries then tend to check one more cycle.
You don't need variable-length paths ([*1]) in your query; change it to:
MATCH (n:LABEL_TYPE_Project {id:'14'})-[:RELATIONSHIP_scopes]->
(m:LABEL_TYPE_PermissionNode)-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE m.id in ['1', '2', ... '1035242', '1035243']
AND o.content =~ '.*itzndby.*' RETURN o LIMIT 20
For real queries use parameters, e.g. for the project id and the permission ids:
MATCH (n:LABEL_TYPE_Project {id: {p_id}})-[:RELATIONSHIP_scopes]->(m:LABEL_TYPE_PermissionNode)-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE m.id in {m_ids} AND o.content =~ '.*'+{item_content}+'.*'
RETURN o LIMIT 20
Remember that realistic query performance only shows up on a warmed-up system, so run the query at least twice.
You might also want to split up your query:
MATCH (n:LABEL_TYPE_Project {id: {p_id}})-[:RELATIONSHIP_scopes]->(m:LABEL_TYPE_PermissionNode)
WHERE m.id in {m_ids}
WITH distinct m
MATCH (m)-[:RELATIONSHIP_observedBy]->(o:LABEL_TYPE_Item)
WHERE o.content =~ '.*'+{item_content}+'.*'
RETURN o LIMIT 20
Also learn about PROFILE; you can prefix your query with it in the old webadmin: http://localhost:7474/webadmin/#/console/
If you use Neo4j 2.2-M03 there is built in support for query plan visualization with EXPLAIN and PROFILE prefixes.

Extremely slow weighted-edge query in Neo4j

I have a graph with about 1.2 million nodes, and roughly 3.85 million relationships between them. I need to find the top x weighted edges - that is to say, the top x tuples of a, b, n where a and b are unique pairs of vertices and n is the number of connections between them. I am currently getting this data via the following Cypher query:
MATCH (n)-[r]->(x)
WITH n, x, count(r) as weight
ORDER BY weight DESC
LIMIT 50
RETURN n.screen_name, x.screen_name, weight;
This query takes about 25 seconds to run, which is far too slow for my needs. I ran the query with the profiler, which returned the following:
ColumnFilter(0)
|
+Extract
|
+ColumnFilter(1)
|
+Top
|
+EagerAggregation
|
+TraversalMatcher
+------------------+---------+---------+-------------+------------------------------------------------------------------------------------------------+
| Operator | Rows | DbHits | Identifiers | Other |
+------------------+---------+---------+-------------+------------------------------------------------------------------------------------------------+
| ColumnFilter(0) | 50 | 0 | | keep columns n.twitter, x.twitter, weight |
| Extract | 50 | 200 | | n.twitter, x.twitter |
| ColumnFilter(1) | 50 | 0 | | keep columns n, x, weight |
| Top | 50 | 0 | | { AUTOINT0}; Cached( INTERNAL_AGGREGATE01a74d75-74df-42f8-adc9-9a58163257d4 of type Integer) |
| EagerAggregation | 3292734 | 0 | | n, x |
| TraversalMatcher | 3843717 | 6245164 | | x, r, x |
+------------------+---------+---------+-------------+------------------------------------------------------------------------------------------------+
My questions, then, are these:
1. Are my expectations way off, and is this the sort of thing that's just going to be slow? It's functionally a map/reduce problem, but this isn't that large of a data set - and it's just test data. The real thing will have a lot more (but be filterable by relationship properties; I'm working on a representative sample).
2. What the heck can I do to make this run faster? I've considered using a start statement but that doesn't seem to help. Actually, it seems to make it worse.
3. What don't I know here, and where can I go to find out that I don't know it?
Thanks,
Chris
You're hitting the database 6,245,164 times in the first statement: MATCH (n)-[r]->(x).
It looks like what you're attempting is a graph-global query -- that is, you're trying to run the query over the entire graph. By doing this you're not taking advantage of indexing that would reduce the number of hits to the database.
To do this at the level of performance you need, an unmanaged extension may be required.
http://docs.neo4j.org/chunked/stable/server-unmanaged-extensions.html
Also, a good community resource to learn about unmanaged extensions: http://www.maxdemarzi.com/
The approach here would be to create a REST API method that extends the Neo4j server. This requires some experience programming with Java.
Alternatively you may be able to run an iterative Cypher query that updates an aggregate count on each [r] between (n) and (x).
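As a sketch of that last idea (not spelled out in the answer): the pair weights could be precomputed into a single aggregated relationship per pair, here a hypothetical :AGGREGATED type, so that the top-x query becomes a simple ordered scan:
// precompute once (or periodically): one AGGREGATED edge per (n, x) pair
MATCH (n)-[r]->(x)
WHERE type(r) <> 'AGGREGATED'
WITH n, x, count(r) AS weight
MERGE (n)-[agg:AGGREGATED]->(x)
SET agg.weight = weight;
// the read query then avoids aggregating 3.85 million relationships at query time
MATCH (n)-[agg:AGGREGATED]->(x)
RETURN n.screen_name, x.screen_name, agg.weight AS weight
ORDER BY weight DESC
LIMIT 50;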

Neo4j cypher query to return relationship property and sum of all matching relationship properties

I'm trying to return a relationship property (called proportion) plus the sum of that property for all relationship matched by a Cypher query in Neo4j. I've gotten this far:
START alice=node(3)
MATCH p=(alice)<-[r:SUPPORTED_BY]-(n)
RETURN reduce(total=0, rel in relationships(p): total + rel.proportion), sum(r.proportion) AS total;
This returns:
+---------+-------+
| reduced | total |
+---------+-------+
| 2       | 2     |
| 1       | 1     |
+---------+-------+
where I was expecting:
+---------+-------+
| reduced | total |
+---------+-------+
| 2       | 3     |
| 1       | 3     |
+---------+-------+
As a beginner user of Cypher, I'm not really sure how to approach this query; I'm clearly not using reduce correctly. Any advice would be appreciated.
You need to use WITH to split up the query into two parts:
find the sum of all proportions, and pass it on as a bound name to the next part;
find the individual proportions.
START alice=node(3)
MATCH alice<-[r:SUPPORTED_BY]-()
WITH alice, sum(r.proportion) AS total
MATCH alice<-[r:SUPPORTED_BY]-(other)
RETURN other.name, r.proportion, total
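On Neo4j 2.1 or later, a single-pass alternative (a sketch, not part of the original answer) is to collect the rows alongside the aggregate and unwind them again:
START alice=node(3)
MATCH alice<-[r:SUPPORTED_BY]-(other)
WITH collect({ name: other.name, proportion: r.proportion }) AS rows, sum(r.proportion) AS total
UNWIND rows AS row
RETURN row.name, row.proportion, total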
