Cypher: analog of `sort -u` to merge 2 collections? - neo4j

Suppose I have a node with a collection in a property, say
START x = node(17) SET x.c = [ 4, 6, 2, 3, 7, 9, 11 ];
and somewhere (i.e. from .csv file) I get another collection of values, say
c1 = [ 11, 4, 5, 8, 1, 9 ]
I'm treating my collections as just sets; the order of elements does not matter. What I need is to merge x.c with c1 with some magic operation so that the resulting x.c contains only the distinct elements from both. The following idea comes to mind (yet untested):
LOAD CSV FROM "file:///tmp/additives.csv" as row
START x=node(TOINT(row[0]))
MATCH c1 = [ elem IN SPLIT(row[1], ':') | TOINT(elem) ]
SET
x.c = [ newxc IN x.c + c1 WHERE (newx IN x.c AND newx IN c1) ];
This won't work; it gives an intersection, not a collection of distinct items.
More RTFM gives another idea: use REDUCE()? But how?
How can I extend Cypher with a new built-in function UNIQUE() which accepts a collection and returns it cleaned of duplicates?
UPD. It seems that the FILTER() function is something close, but it yields an intersection again :(
x.c = FILTER( newxc IN x.c + c1 WHERE (newx IN x.c AND newx IN c1) )
WBR,
Andrii

How about something like this...
with [1,2,3] as a1
, [3,4,5] as a2
with a1 + a2 as all
unwind all as a
return collect(distinct a) as unique
Concatenate the two collections, UNWIND the result, and collect the distinct elements.
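Applied to the node from the question, a minimal sketch might look like this (assuming the node is still addressed by its internal id, as in the original START clause):
// hypothetical lookup by internal id (17, as in the question)
MATCH (x)
WHERE id(x) = 17
WITH x, x.c + [11, 4, 5, 8, 1, 9] AS all
UNWIND all AS a
WITH x, collect(distinct a) AS unique
SET x.c = unique
RETURN x.c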
Dec 15, 2014 - here is an update to my answer...
I started with a node in the neo4j database...
//create a node in the DB with a collection of values on it
create (n:Node {name:"Node 01",values:[4,6,2,3,7,9,11]})
return n
I created a csv sample file with two columns...
Name,Coll
"Node 01","11,4,5,8,1,9"
I created a LOAD CSV statement...
LOAD CSV
WITH HEADERS FROM "file:///c:/Users/db/projects/coll-merge/load_csv_file.csv" as row
// find the matching node
MATCH (x:Node)
WHERE x.name = row.Name
// merge the collections
WITH x.values + split(row.Coll,',') AS combo, x
// process the individual values
UNWIND combo AS value
// use toInt as the values from the csv come in as strings
// there may be a better way around this but I am a little short on time
WITH toInt(value) AS value, x
// might as well sort 'em so they are all purdy
ORDER BY value
WITH collect(distinct value) AS values, x
SET x.values = values

You could use reduce like this:
with [1,2,3] as a, [3,4,5] as b
return reduce(r = [], x in a + b | case when x in r then r else r + [x] end)
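With a = [1,2,3] and b = [3,4,5] this yields [1,2,3,4,5]: each element of the concatenated list is appended to the accumulator only if it is not already in it.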

Since Neo4j 3.0, with APOC Procedures you can easily solve this with apoc.coll.union(). In 3.1+ it's a function, and can be used like this:
...
WITH apoc.coll.union(list1, list2) as unionedList
...
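For the original question, a minimal sketch (assuming APOC is installed and the target node can be matched somehow, here by a hypothetical name property) could be:
MATCH (x:Node {name: "Node 01"})       // hypothetical lookup
WITH x, [11, 4, 5, 8, 1, 9] AS c1      // the incoming collection
SET x.c = apoc.coll.union(x.c, c1)     // distinct union of both collections
RETURN x.c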

Related

How do I find the maximum value between several specific nodes in neo4j?

EXAMPLE:
A directed graph of the following type is given:
CREATE
(b0:Bar {id:1, value: 1}),
(b1:Bar {id:2, value: 4}),
(b2:Bar {id:3, value: 3}),
(b3:Bar {id:4, value: 5}),
(b4:Bar {id:5, value: 9}),
(b5:Bar {id:6, value: 7}),
(b0)-[:NEXT_BAR]->(b1),
(b1)-[:NEXT_BAR]->(b2),
(b2)-[:NEXT_BAR]->(b3),
(b3)-[:NEXT_BAR]->(b4),
(b4)-[:NEXT_BAR]->(b5);
MATCH (b1)-[*1..5]->(b2)-->(b3)-[*1..5]->(b4)
WHERE // here you need a condition that the maximum value of nodes b3 and b4 is greater than the maximum value of nodes b1 and b2
RETURN //b1_b2_max, b3_b4_max
In other words, the result should be as follows:
b1_b2_max | b3_b4_max
4 | 9
Can you tell me how I can find aggregated information between certain nodes (including these nodes)?
What should my query look like?
You could do something like this to get the right values.
// start with a set of slices you would like to get the max from
WITH [[1,3],[3,5]] AS slices
// match the path you want to get the slices from
MATCH path=(:Bar {id: 1})-[:NEXT_BAR*..5]->(end:Bar)
WHERE NOT (end)-->()
WITH slices, path
// look at the nodes in each slice of the path
UNWIND slices AS slice
// find the max value in the slice
UNWIND nodes(path)[slice[0]..slice[1]] AS b
RETURN 'b' + toString(slice[0]) + '_b' + toString(slice[1]-1) + '_max' AS slice_name, max(b.value) AS max_value
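For the sample graph above this should produce 4 for 'b1_b2_max' and 9 for 'b3_b4_max', which matches the expected output in the question.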
Rather than returning the slice name and max value as rows, you can collect them as pairs, convert them to a map using apoc.map.fromPairs, and then access specific values in the map to return them as columns.
WITH [[1,3],[3,5]] AS slices
MATCH path=(:Bar {id: 1})-[:NEXT_BAR*..5]->(end:Bar)
WHERE NOT (end)-->()
WITH slices, path
UNWIND slices AS slice
UNWIND nodes(path)[slice[0]..slice[1]] AS b
WITH ['b' + toString(slice[0]) + '_b' + toString(slice[1]-1) + '_max', max(b.value)] AS pair
WITH collect(pair) AS pairs
RETURN apoc.map.fromPairs(pairs)['b1_b2_max'] AS b1_b2_max,
apoc.map.fromPairs(pairs)['b3_b4_max'] AS b3_b4_max

Neo4j - Check if sequential path exists

I want to see if a path exists for a graph, given a list of sequential properties to search for. The list can be of variable length.
This is my most recent attempt:
WITH ['a', 'b', 'c', 'd'] AS search_list // can be any list of strings
// FOREACH (i IN range(search_list) |
// MATCH (a:Node {prop:i})-->(b:Node {prop:i+1}))
// RETURN true if all relationships exist, false if not
This solution doesn't work because you can't use MATCH in a FOREACH. What should I do instead?
You can try to build the query for matching the entire path manually, and then execute it using the apoc.cypher.run procedure:
WITH ['a', 'b', 'c', 'd'] AS search_list
WITH search_list,
'MATCH path = ' +
REDUCE(c = '', i in range(0, size(search_list) - 2) |
c + '(:Node {prop: $props[' + i + ']})-->'
) +
'(:Node {prop: $props[' + (size(search_list) - 1) +']}) ' +
'RETURN count(path) as pathCount' AS cypherQuery
CALL apoc.cypher.run(cypherQuery, {props: search_list}) YIELD value
RETURN CASE WHEN value.pathCount > 0
THEN true
ELSE false
END AS pathExists
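For the four-element search_list above, the generated cypherQuery string should expand to something like this (line breaks added for readability):
MATCH path = (:Node {prop: $props[0]})-->(:Node {prop: $props[1]})-->
             (:Node {prop: $props[2]})-->(:Node {prop: $props[3]})
RETURN count(path) as pathCount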
Assuming you pass the list of property values in a $props parameter and that list has 4 elements, this query first searches for all paths with 4 nodes (3 relationships) that have the desired start and end nodes (to narrow down the candidate paths), and then filters on the interior nodes of each path:
MATCH p=(a:Node {prop: $props[0]})-[*3]->(b:Node {prop: $props[-1]})
WITH p, NODES(p)[1..-1] AS midNodes
WHERE ALL(i IN RANGE(1, SIZE(midNodes)) WHERE midNodes[i-1].prop = $props[i])
RETURN p;
To increase efficiency, you should create an index on :Node(prop) as well.
If this query returns nothing, then there are no matching paths.
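As a sketch, the index mentioned above can be created like this (Neo4j 3.x syntax; newer versions use CREATE INDEX FOR (n:Node) ON (n.prop) instead):
CREATE INDEX ON :Node(prop);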

Cypher Neo4j query optimization

I use the following Cypher query:
MATCH (v:Value)-[:CONTAINS]->(hv:HistoryValue)
WHERE v.id = {valueId}
OPTIONAL MATCH (hv)-[:CREATED_BY]->(u:User)
WHERE {fetchCreateUsers}
WITH u, hv
ORDER BY hv.createDate DESC
WITH count(hv) as count, ceil(toFloat(count(hv)) / {maxResults}) as step, COLLECT({userId: u.id, historyValueId: hv.id, historyValue: hv.originalValue, historyValueCreateDate: hv.createDate}) AS data
RETURN REDUCE(s = [], i IN RANGE(0, count - 1, CASE step WHEN 0 THEN 1 ELSE step END) | s + data[i]) AS result, step, count
This query works fine and does exactly what I need.
Right now I'm concerned about two possible issues in this query, from the standpoint of performance and Cypher best practices.
First of all, as you can see, I use the same count(hv) function twice. Will this cause performance problems during execution, or are Cypher and Neo4j smart enough to optimize it? If not, please show how to fix it.
The second place is the CASE expression inside the range() function. The same question applies here: will this CASE expression be evaluated only once, or once for every iteration over my range? Please show how to fix it if needed.
UPDATED
I tried to add a separate WITH for count, but the query returns an empty result:
MATCH (v:Value)-[:CONTAINS]->(hv:HistoryValue)
WHERE v.id = {valueId}
OPTIONAL MATCH (hv)-[:CREATED_BY]->(u:User)
WHERE {fetchCreateUsers}
WITH u, hv ORDER BY hv.createDate DESC
WITH u, hv, count(hv) as count
WITH u, hv, count, ceil(toFloat(count) / {maxResults}) as step, COLLECT({userId: u.id, historyValueId: hv.id, historyValue: hv.originalValue, historyValueCreateDate: hv.createDate}) AS data
RETURN REDUCE(s = [], i IN RANGE(0, count - 1, CASE step WHEN 0 THEN 1 ELSE step END) | s + data[i]) AS result, step, count
1 MATCH (v:Value)-[:CONTAINS]->(hv:HistoryValue)
2 WHERE v.id = {valueId}
3 OPTIONAL MATCH (hv)-[:CREATED_BY]->(u:User)
4 WHERE {fetchCreateUsers}
5 WITH u, hv
6 ORDER BY hv.createDate DESC
7 WITH count(hv) as count, ceil(toFloat(count(hv)) / {maxResults}) as step, COLLECT({userId: u.id, historyValueId: hv.id, historyValue: hv.originalValue, historyValueCreateDate: hv.createDate}) AS data
8 RETURN REDUCE(s = [], i IN RANGE(0, count - 1, CASE step WHEN 0 THEN 1 ELSE step END) | s + data[i]) AS result, step, count
(1) You need to pass hv in line 5, because its values are collected in line 7. That said, you can still do something like this:
5 WITH u, collect(hv) AS hvs, count(hv) as count
UNWIND hvs AS hv
However, this is not very elegant and probably not worth doing.
(2) You can calculate the CASE expression in line 7:
7 WITH count, data, step, CASE step WHEN 0 THEN 1 ELSE step END AS stepFlag
8 RETURN REDUCE(s = [], i IN RANGE(0, count - 1, stepFlag) | s + data[i]) AS result, step, count
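Putting suggestion (2) back into the original query, the adjusted version might look like this (untested sketch that keeps the original parameter placeholders):
MATCH (v:Value)-[:CONTAINS]->(hv:HistoryValue)
WHERE v.id = {valueId}
OPTIONAL MATCH (hv)-[:CREATED_BY]->(u:User)
WHERE {fetchCreateUsers}
WITH u, hv
ORDER BY hv.createDate DESC
WITH count(hv) AS count, ceil(toFloat(count(hv)) / {maxResults}) AS step,
     COLLECT({userId: u.id, historyValueId: hv.id, historyValue: hv.originalValue, historyValueCreateDate: hv.createDate}) AS data
WITH count, data, step, CASE step WHEN 0 THEN 1 ELSE step END AS stepFlag
RETURN REDUCE(s = [], i IN RANGE(0, count - 1, stepFlag) | s + data[i]) AS result, step, count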

Cypher: find a path which takes the maximum valued step each time

I am trying to write a cypher query that finds a path between nodes a and b such that each step has the maximum timestamp value out of all available alternatives that is less than 15.
Here is my query so far, it does everything except for select the maximum possible timestamp at each step. How do I express this condition?
MATCH path=(a:NODE)-[rs:PARENT*]->(b:NODE)
WHERE a.name = 'SOME_VALUE' and b.name = 'SOME_OTHER_VALUE' AND ALL (r IN rs
WHERE r.timestamp < 15)
RETURN path
This is just awful pseudocode, but I think it expresses what I am looking for:
MATCH path=(a:NODE)-[rs:PARENT*]->(b:NODE)
WHERE a.name = 'SOME_VALUE' and b.name = 'SOME_OTHER_VALUE' AND ALL (r IN rs
WHERE r.timestamp < 15 AND r.timestamp = max(allPossibleRsForThisStep))
RETURN path
Can this kind of query be written in cypher?
It won't be fast in Cypher. It's possible to compute all the maximum values first and then compare the max value in that list with the current value at each step.
Something like this (untested):
// first compute, for each step index, the maximum timestamp across all candidate paths
MATCH (a:NODE)-[rs:PARENT*..10]->(b:NODE)
WHERE a.name = 'SOME_VALUE' AND b.name = 'SOME_OTHER_VALUE'
UNWIND range(0, size(rs) - 1) AS idx
WITH idx, max(rs[idx].timestamp) AS max_val
ORDER BY idx
WITH collect(max_val) AS max_vals
// then keep only the paths that take the maximum timestamp (below 15) at every step
MATCH path = (a:NODE)-[rs:PARENT*..10]->(b:NODE)
WHERE a.name = 'SOME_VALUE' AND b.name = 'SOME_OTHER_VALUE'
  AND ALL(idx IN range(0, size(rs) - 1)
          WHERE rs[idx].timestamp < 15 AND rs[idx].timestamp = max_vals[idx])
RETURN path

Filling missing data after outer join

I have two time series which are at the same sampling rate. I would like to perform an outer join and then fill in any missing data (post outer join, there can be points in time where data exists in one series but not the other even though they are the same sampling rate) with the most recent previous value.
How can I perform this operation using Deedle?
Edit:
Based on this, I suppose you can re-sample before the join like so:
// Get the most recent value, sampled at 2 hour intervals
someSeries |> Series.sampleTimeInto
(TimeSpan(2, 0, 0)) Direction.Backward Series.lastValue
After doing this you can safely Join. Perhaps there is another way?
You should be able to perform the outer join on the original series (it is better to turn them into frames, because then you'll get a nice multi-column frame) and then fill the missing values using Frame.fillMissing:
// Note that s1[2] is undefined and s2[3] is undefined
let s1 = series [ 1=>1.0; 3=>3.0; 5=>5.0 ]
let s2 = series [ 1=>1.1; 2=>2.2; 5=>5.5 ]
// Build frames to make joining easier
let f1, f2 = frame [ "S1" => s1 ], frame [ "S2" => s2 ]
// Perform outer join and then fill the missing data
let f = f1.Join(f2, JoinKind.Outer)
let res = f |> Frame.fillMissing Direction.Forward
The intermediate frame with missing values and the final result look like this:
val it : Frame<int,string> =
S1 S2
1 -> 1 1.1
2 -> <missing> 2.2
3 -> 3 <missing>
5 -> 5 5.5
>
val it : Frame<int,string> =
S1 S2
1 -> 1 1.1
2 -> 1 2.2
3 -> 3 2.2
5 -> 5 5.5
Note that the result can still contain missing values - if the first value is missing, the fillMissing function has no previous value to propagate and so the series may start with some missing values.