I upgraded Neo4j from v3.3 to v3.4 to try out the new spatial and temporal functions.
I'm trying very simple queries, once with the date function and once without. The results are different.
match (r:Model) where r.open_date>"2018-04-26" return count(r);
Result is 19283.
match (r:Model) where r.open_date>date("2018-04-26") return count(r);
Result is 0.
What is the way to use the new functions?
[EDITED]
The new temporal types, like Date and Duration, are distinct types of their own, and it does not make sense to compare them directly to strings or numbers.
Assuming r.open_date has the right format, this should work:
MATCH (r:Model)
WHERE DATE(r.open_date) > DATE("2018-04-26")
RETURN COUNT(r);
Also, the following query may be more performant (since a second DATE object does not need to be constructed):
MATCH (r:Model)
WHERE TOSTRING(DATE(r.open_date)) > "2018-04-26"
RETURN COUNT(r);
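Alternatively, since you are already on 3.4, you could convert open_date to a native Date once, so that your original comparison against date("2018-04-26") works directly. A minimal sketch, assuming open_date is stored as a "yyyy-MM-dd" string:
// One-off conversion: replace the string property with a native Date (Neo4j 3.4+)
MATCH (r:Model)
WHERE r.open_date IS NOT NULL
SET r.open_date = DATE(r.open_date);
After that, the original query comparing r.open_date to date("2018-04-26") compares Date with Date and should return the expected count.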
Related
I'm trying to get a simple date-time comparison to work, but the query doesn't return any results.
The query is
MATCH (n:Event) WHERE n.start_datetime > datetime("2019-06-01T18:40:32.142+0100") RETURN n.start_datetime
According to this documentation page, this type of comparison should work. I've also tried creating the datetime object explicitly, for instance with datetime({year: 2019, month: 7}).
I've checked that start_datetime is in fact well formatted, for example by checking that values such as start_datetime.year were correct, and couldn't find any error.
Given that all the records in the database are from 2021, the above query should return every event, yet is returning nothing.
Doing the query with only the year comparison instead of the full datetime comparison works:
MATCH (n:Event) WHERE n.start_datetime.year > datetime("2019-06-01T18:40:32.142+0100").year RETURN n.start_datetime
Double-check the data type of start_datetime. It can be in either epoch seconds or epoch milliseconds. You need to convert the epoch value to a datetime so that both sides of the comparison are the same data type. The reason your second query (.year) works is that .year returns an integer value.
Run below to get samples:
MATCH (n:Event)
RETURN distinct n.start_datetime LIMIT 5
If you see that the values have 10 digits, then they are in epoch seconds. If so, run the query below:
MATCH (n:Event)
WHERE n.start_datetime is not null
AND datetime({epochSeconds: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25
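If the values have 13 digits instead, they are in epoch milliseconds; the same query then uses epochMillis:
MATCH (n:Event)
WHERE n.start_datetime is not null
AND datetime({epochMillis: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25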
It turns out the error was due to the timezone. Neo4j had saved the properties as LocalDateTime, which apparently can't be compared to ZonedDateTime.
I used py2neo for most of the node management, and the solution was to give a specific timezone to the Python property. In my case this was done using:
datetime.datetime.fromtimestamp(kwargs["end"], pytz.UTC)
After that, I was able to do the comparisons.
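If you cannot reload the data, another workaround is to compare against a localdatetime literal instead, so both sides of the comparison have the same type. This is only a sketch; it simply drops the offset, so it assumes the stored values and the literal refer to the same time zone:
MATCH (n:Event)
WHERE n.start_datetime > localdatetime("2019-06-01T18:40:32.142")
RETURN n.start_datetime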
Hopefully this saves a couple of hours for future developers.
I've been running this cypher
CREATE (b:Request {created_at: timestamp(), depth: $depth})
which relies on the cypher timestamp() function.
The documentation calls out
The timestamp() function returns the equivalent value of datetime().epochMillis.
meaning I could do the time calculation server-side.
However, since py2neo gives access to the scalar functions, I'd like to know if it's A) possible to use timestamp() from py2neo, and B) if so, how.
b = Node("Request", created_at=timestamp(), depth=0) did not work at all.
Note that if you do this, you are generating a timestamp client-side rather than server-side and you should be aware of the implications of this, re clock sync, etc.
That said, you'll need to pass an actual Python datetime instance (Cypher can only be evaluated server-side):
from datetime import datetime
b = Node("Request", created_at=datetime.now(), depth=0)
You'll need to make sure you're running at least py2neo 4.2 and Neo4j 3.4 for this to be available.
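If you specifically want the value generated server-side, another option (not a py2neo scalar function, just plain Cypher) is to keep the assignment in the query you run and only pass the parameters from Python. A sketch on Neo4j 3.4+, storing a native DateTime instead of epoch millis:
// Server-side timestamp: let the database assign created_at at transaction time
CREATE (b:Request {created_at: datetime(), depth: $depth})
RETURN b.created_at;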
I have auditing fields for all of my entities:
createDate
updateDate
I always initialize createDate during the entity creation but updateDate can contain NULL until the first update.
I have to implement sorting feature over these fields.
With createDate everything works fine but with updateDate I have issues.
With a mixed set of NULLs and dates in updateDate, a descending sort puts the NULLs first, which is not what I expect here.
I understand that, according to the Neo4j documentation, this is the expected behavior - "When sorting the result set, null will always come at the end of the result set for ascending sorting, and first when doing descending sort." - but I don't know how to implement proper sorting from the user's perspective, where the user sees the most recently updated documents at the top of the list. Some time ago I even created a GitHub issue for this feature: https://github.com/opencypher/openCypher/issues/238
One workaround I can see here is to also populate updateDate together with createDate during entity creation, but I really hate this solution.
Are there any other solutions in order to properly implement this task?
You can try using the coalesce() function. It will return the first non-null value in the list of expressions passed to it.
MATCH (n:Node)
RETURN n
ORDER BY coalesce(n.updateDate, 0) DESC
EDIT:
From comments:
on the database level it is something like this: "updateDate": "2017-09-07T22:27:11.012Z". On the SDN4 level it is a Java java.util.Date type
In this case you can replace the 0 with a date representing a start-of-time constant (like "1970-01-01T00:00:00.000Z").
MATCH (n:Node)
RETURN n
ORDER BY coalesce(n.updateDate, "1970-01-01T00:00:00.000Z") DESC
I'd just use the createDate as the updateDate when updateDate IS NULL:
MATCH (n:Node)
RETURN n
ORDER BY coalesce(n.updateDate, n.createDate) DESC
You may want to consider storing your ISO 8601 timestamp strings as (millisecond) integers instead. That could make most queries that involve datetime manipulations more efficient (or even possible), and would also use up less DB space compared to the equivalent string.
One way to do that conversion is to use the APOC function apoc.date.parse. For example, this converts 2017-09-07T22:27:11.012Z to an integer (in millisecond units):
apoc.date.parse('2017-09-07T22:27:11.012Z', 'ms', "yyyy-MM-dd'T'HH:mm:ss.SSSX")
With this change to your data model, you could also initialize updateDate to 0 at node creation time. That would allow you to avoid using COALESCE(n.updateDate, 0) for sorting purposes (as suggested by @Bruno Peres), and the 0 value would serve as an indication that the node was never updated.
(The drawback is that all nodes would have an updateDate property, even the ones that were never updated.)
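Applied to existing data, that conversion might look like the following sketch; it assumes the nodes are labeled Node, the property is named updateDate, every value matches the format shown above, and APOC is installed:
// One-off conversion of ISO 8601 strings to epoch milliseconds
MATCH (n:Node)
WHERE n.updateDate IS NOT NULL
SET n.updateDate = apoc.date.parse(n.updateDate, 'ms', "yyyy-MM-dd'T'HH:mm:ss.SSSX");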
I'm using Neo4j 3.0 and it seems that the HAS function was removed.
The type of my relationship is a date, and I'm looking to retrieve all relationships between two dates where my property "a" is greater than a certain value.
How can I test this using Neo4j?
Thank you
[EDITED to add info from comments]
I have tried this:
MATCH p=(n:origin)-[r]->()
WHERE r>'2015-01'
RETURN AVG(r.amount) as totalamout;
I created a relationship per date, and each one has a property, amount. I am looking to compute the average amount for a certain period, for example the average amount since 2015-04.
To answer the issue raised by your first sentence: in neo4j 3.x, the HAS() function was replaced by EXISTS().
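For example, a property-existence check that used to be written with HAS() now looks like this (using the property "a" from your question):
MATCH (n:origin)-[r]->()
WHERE EXISTS(r.a)
RETURN r;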
[UPDATE 1]
This version of your query should work:
MATCH p=(n:origin)-[r]->()
WHERE TYPE(r) > '2015-01'
RETURN AVG(r.amount) as totalamout;
However, it is a bad idea to give your relationships different types based on a date. It is better to just use a date property.
[UPDATE 2]
If you changed your data model to add a date property to your relationships (to which I will give the type FOO), then the following query will find the average amount, per p, of all the relationships whose date is after 2015-01 (assuming that all your dates follow the same strict YYYY-MM pattern):
MATCH p=(n:origin)-[r:FOO]->()
WHERE r.date > '2015-01'
RETURN p, AVG(r.amount) as avg_amount;
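If you make that change, the existing date-typed relationships need to be rewritten, since plain Cypher cannot change a relationship's type in place. One possible migration sketch, assuming amount is the only property to carry over and FOO is the new type:
MATCH (n:origin)-[r]->(m)
CREATE (n)-[:FOO {date: TYPE(r), amount: r.amount}]->(m)
DELETE r;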
I'd like to make a Cypher query that generates a specific JSON output. Part of this output includes an object with a dynamic number of keys corresponding to the children of a parent node:
{
...
"parent_keystring" : {
child_node_one.name : child_node_one.foo
child_node_two.name : child_node_two.foo
child_node_three.name : child_node_three.foo
child_node_four.name : child_node_four.foo
child_node_five.name : child_node_five.foo
}
}
I've tried to create a cypher query but I do not believe I am close to achieving the desired output mentioned above:
MATCH (n)-[relone:SPECIFIC_RELATIONSHIP]->(child_node)
WHERE n.id='839930493049039430'
RETURN n.id AS id,
n.name AS name,
labels(n)[0] AS type,
{
COLLECT({
child.name : children.foo
}) AS rel_two_representation
} AS parent_keystring
I had planned for children.foo to be a count of how many occurrences there are of each particular relationship/child of the parent. Is there a way to make use of the reduce function, where a report would be generated based on analyzing the array proposed below? I.e., the report would be a JSON object where each key is a distinct RELATIONSHIP and the property value would be the number of times that relationship stems from the parent node?
Thank you in advance for any guidance you can offer.
I'm not sure that Cypher will let you use a variable to determine an object's key. Would using an Array work for you?
COLLECT([child.name, children.foo]) AS rel_two_representation
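If APOC is available, another option is to build a map with genuinely dynamic keys from name/count pairs. A sketch along those lines, reusing the identifiers from your query and assuming the value you want per key is the occurrence count:
MATCH (n)-[relone:SPECIFIC_RELATIONSHIP]->(child_node)
WHERE n.id = '839930493049039430'
WITH n, child_node.name AS key, COUNT(relone) AS occurrences
WITH n, COLLECT([key, occurrences]) AS pairs
RETURN n.id AS id, n.name AS name, labels(n)[0] AS type,
       apoc.map.fromPairs(pairs) AS parent_keystring;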
I think the Neo4j server API output should be treated like any other database output (like MySQL's). Even if it is possible to achieve the desired output with the default functionality, it is not a natural task for a database.
You should probably look into creating your own server plugin. That lets you implement any custom logic and produce the desired output.