How can I convert a string time into an integer in Neo4j? I have been trying for quite a long time, but there does not seem to be any clear solution for it. The only solution I found is to add a new property to my node while loading it from CSV, which I don't want to do.
I have Time in the following format:
"18:11:00"
and I want to do some subtraction on them.
I tried doing the following but to no avail:
match (st1:Stoptime)-[r:PRECEDES]->(st2:Stoptime)
return st1.arrival_time, toInt(st1.arrival_time)
limit 10
But I get the following output:
"18:11:00" null
You can install APOC procedures and do it using the function apoc.date.parse:
return apoc.date.parse('18:11:00','s','HH:mm:ss')
Running this example, the output is:
╒════════════════════════════════════════════╕
│"apoc.date.parse("18:11:00",'s','HH:mm:ss')"│
╞════════════════════════════════════════════╡
│65460                                       │
└────────────────────────────────────────────┘
The first parameter is the date/time to be parsed. The second parameter is the target time unit. In this case I have specified seconds (s). The third parameter indicates the date/time format of the first parameter.
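Applied to the subtraction from the question, a minimal sketch (assuming every arrival_time uses the HH:mm:ss format):
match (st1:Stoptime)-[r:PRECEDES]->(st2:Stoptime)
// parse both times to seconds since midnight, then subtract
return apoc.date.parse(st2.arrival_time, 's', 'HH:mm:ss')
     - apoc.date.parse(st1.arrival_time, 's', 'HH:mm:ss') as diff_seconds
limit 10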
Note: remember to install the APOC procedures release that matches the version of Neo4j you are using. Take a look at the version compatibility matrix.
I'm trying to get a simple date-time comparison to work, but the query doesn't return any results.
The query is
MATCH (n:Event) WHERE n.start_datetime > datetime("2019-06-01T18:40:32.142+0100") RETURN n.start_datetime
According to this documentation page, this type of comparison should work. I've also tried creating the datetime object explicitly, for instance with datetime({year: 2019, month: 7}).
I've checked that start_datetime is in fact well formatted, for example by verifying that start_datetime.year returns the correct value, and couldn't find any error.
Given that all the records in the database are from 2021, the above query should return every event, yet it returns nothing.
Doing the query with only a year comparison instead of a full datetime comparison works:
MATCH (n:Event) WHERE n.start_datetime.year > datetime("2019-06-01T18:40:32.142+0100").year RETURN n.start_datetime
Double-check the data type of start_datetime. It can be stored as either epoch seconds or epoch milliseconds. You need to convert the epoch value to a datetime, so that both sides of the comparison have the same data type. The reason your 2nd query (.year) works is that .year returns an integer value.
Run the query below to get samples:
MATCH (n:Event)
RETURN distinct n.start_datetime LIMIT 5
If you see 10 digits, the values are in epoch seconds. If so, run the query below:
MATCH (n:Event)
WHERE n.start_datetime is not null
AND datetime({epochSeconds: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25
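If you see 13 digits instead, the values are in epoch milliseconds; a sketch of the same query for that case uses epochMillis:
MATCH (n:Event)
WHERE n.start_datetime is not null
// epochMillis instead of epochSeconds for 13-digit values
AND datetime({epochMillis: n.start_datetime}) > datetime("2019-06-01T18:40:32.142+0100")
RETURN n.start_datetime
LIMIT 25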
It turns out the error was due to the timezone. Neo4j had saved the properties as LocalDateTime, which apparently can't be compared to ZonedDateTime.
I used py2neo for most of the node management, and the solution was to give a specific timezone to the Python property. This was done (in my case) using:
datetime.datetime.fromtimestamp(kwargs["end"], pytz.UTC)
After that, I was able to do the comparisons.
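For context, a minimal self-contained sketch of that fix; the connection details, label, and epoch value are placeholders. The key point is that fromtimestamp with an explicit tzinfo produces a timezone-aware datetime, which py2neo stores as a zoned DateTime rather than a LocalDateTime:
import datetime

import pytz
from py2neo import Graph, Node

end_epoch = 1559414432  # placeholder standing in for kwargs["end"]

# An explicit tzinfo makes the datetime timezone-aware
end_dt = datetime.datetime.fromtimestamp(end_epoch, pytz.UTC)

graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder connection
graph.create(Node("Event", start_datetime=end_dt))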
Hope this saves a couple of hours for future developers.
I upgraded a Neo4J v3.3 to v3.4 to try out the new spatial and temporal functions.
I'm trying very simple queries, once with the date function and once without. The results are different.
match (r:Model) where r.open_date>"2018-04-26" return count(r);
Result is 19283.
match (r:Model) where r.open_date>date("2018-04-26") return count(r);
Result is 0.
What is the right way to use the new functions?
[EDITED]
The new temporal types, like Date and Duration, are distinct types of their own, and it does not make sense to compare them directly to strings or numbers.
Assuming r.open_date has the right format, this should work:
MATCH (r:Model)
WHERE DATE(r.open_date) > DATE("2018-04-26")
RETURN COUNT(r)
Also, the following query may be more performant (since a second DATE object does not need to be constructed; comparing strings works here because ISO 8601 date strings sort lexicographically):
MATCH (r:Model)
WHERE TOSTRING(DATE(r.open_date)) > "2018-04-26"
RETURN COUNT(r)
I am trying to load a JSON file of about 700k. But it is showing me a heap memory out-of-space error.
My query is as below:
WITH "file:///Users//arundhathi.d//Documents//Neo4j//default.graphdb//import//tjson.json" as url
call apoc.load.json(url) yield value as article return article
As with CSV, I tried to use USING PERIODIC COMMIT 1000 with JSON, but it is not allowed when loading JSON.
How do I load bulk JSON data?
You can also convert JSON into CSV files using jq, an extremely fast JSON processor: https://stedolan.github.io/jq/tutorial/
This is the recommended way according to: https://neo4j.com/blog/bulk-data-import-neo4j-3-0/
If you have many files, write a Python program or similar that iterates over the files, calling:
os.system("cat file{}.json | jq '[.entity1, .entity2, .entity3] | @csv' >> concatenatedCSV.csv".format(num))
or in Go:
exec.Command("sh", "-c", "cat file"+num+".json | jq '[.entity1, .entity2, .entity3] | @csv' >> concatenatedCSV.csv").Run()
I recently did this for about 700GB of JSON files. It takes some thought to get the CSV files in the right format, but if you follow the jq tutorial you'll pick up how to do it. Additionally, check out how the headers need to be formatted here: https://neo4j.com/docs/operations-manual/current/tools/import/
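As a rough illustration of that header convention (the column names here are hypothetical), the import tool expects role annotations such as :ID and :LABEL in the first row:
entity1:ID,entity2,entity3,:LABEL
"id-1","some value","another value","Entity"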
It took about a day to convert it all, but given the transaction overhead of using APOC, and the ability to re-import at any time once the files are in the right format, it is worth it in the long run.
apoc.load.json now supports a json-path as a second parameter.
To get the first 1000 JSON objects from the array in the file, try this:
WITH "file:///path_to_file.json" as url
CALL apoc.load.json(url, '[0:1000]') YIELD value AS article
RETURN article;
The [0:1000] syntax specifies a range of array indices, and the second number is exclusive (so, in this example, the last index in the range is 999).
The above should at least work in neo4j 3.1.3 (with apoc release 3.1.3.6). Note also that the Desktop versions of neo4j (installed via the Windows and OSX installers) have a new requirement concerning where to put plugins like apoc in order to import local files.
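If you need to import the whole file rather than a slice, one alternative worth trying (not from the original answer; the Article label is just a placeholder) is to batch the writes with apoc.periodic.iterate, which commits every batchSize rows much like USING PERIODIC COMMIT does for CSV:
CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file:///path_to_file.json') YIELD value RETURN value",
  "CREATE (a:Article) SET a += value",  // runs once per value, committed in batches
  {batchSize: 1000})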
When using the logging commands (documentation here), there are many string-only properties, but there are also several GUID and time properties especially for the task.logdetail command.
What format are these properties supposed to be when written to the output? i.e. Should GUIDs be included in "N" (no hyphens) or "D" (hyphens), and should time values include dates?
You just need to specify the GUID value; the hyphenated ("D") format works, as in the example below.
> should time values include dates?
Both date-only and date-plus-time values work fine, for example: 2016/11/28; 2016/11/28 3:00:05; 11/28/2016; 11/28/2016 4:00:10
Simple code:
Write-Host "##vso[task.logdetail id=a804d160-f69f-4e8a-bdd2-0076d716a01f;name=project1;type=build;order=1;starttime=2016/11/27]create new timeline record"
Write-Host "##vso[task.logdetail id=a804d160-f69f-4e8a-bdd2-0076d716a01f;progress=100;state=Completed;finishtime=2016/11/29]update"
Why, when executing this query from SQL Developer connected to my database:
select to_timestamp_tz('05/22/2016 10:18:01 PDT', 'MM/DD/YYYY HH24:MI:SS TZD') from dual;
do I get the following error:
ORA-01857: "not a valid time zone"
01857. 00000 - "not a valid time zone"
*Cause:
*Action:
But, I'm able to execute the query without any error directly from sqlplus on the host where the database is located, getting the expected result:
TO_TIMESTAMP_TZ('05/22/201610:18:01PDT','MM/DD/YYYYHH24:MI:SSTZD')
---------------------------------------------------------------------------
22-MAY-16 10.18.01.000000000 AM -07:00
So, I'm trying to figure out if I'm doing something incorrectly. I have read that the error could be caused by multiple tzabbrev entries for a timezone, but that does not explain why sqlplus runs the query correctly, since I can see the multiple tzabbrev values for different time regions on both the host and SQL Developer (queried from v$timezone_names).
The real issue is that our application uses this query, and we have noticed that this issue reproduces sometimes, even when the application is deployed on the same host as the database.
I added 2 new lines to sqldeveloper\sqldeveloper\bin\sqldeveloper.conf:
AddVMOption -Doracle.jdbc.timezoneAsRegion=false
AddVMOption -Duser.timezone=CET
and this fixed the problem.
Updated
To eliminate the ambiguity of boundary cases when the time switches from Standard Time to Daylight Saving Time, use both the TZR format element and the corresponding TZD format element.
To make your query work without changing anything in the JVM configuration, you should provide the timezone region:
select to_timestamp_tz('05/22/2016 10:18:01 PDT US/Pacific', 'MM/DD/YYYY HH24:MI:SS TZD TZR') from dual;
Because you didn't provide the timezone region, it will use the default one. Let's look at the first parameter, 'oracle.jdbc.timezoneAsRegion'. This is defined by the JDBC driver as follows:
CONNECTION_PROPERTY_TIMEZONE_AS_REGION
Use JVM default timezone as specified rather than convert to a GMT offset. Default is true.
So without defining this property, you force your query to use the default timezone region defined by the 'user.timezone' property, which you haven't actually set. The solution is either to set 'oracle.jdbc.timezoneAsRegion' to false (so the database session's current time zone region is used) or to provide the region explicitly with the 'user.timezone' property.
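And since the real application hits the same issue, here is a rough Java sketch of applying the same two settings programmatically at startup, assuming the Oracle JDBC driver is on the classpath; the connection URL and credentials are placeholders, and passing the settings as -D flags on the JVM command line is equivalent:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import java.util.TimeZone;

public class TimezoneFix {
    public static void main(String[] args) throws Exception {
        // Same effect as AddVMOption -Doracle.jdbc.timezoneAsRegion=false
        System.setProperty("oracle.jdbc.timezoneAsRegion", "false");
        // Same effect as AddVMOption -Duser.timezone=CET
        TimeZone.setDefault(TimeZone.getTimeZone("CET"));

        Properties props = new Properties();
        props.setProperty("user", "scott");      // placeholder credentials
        props.setProperty("password", "tiger");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props)) { // placeholder URL
            // run the to_timestamp_tz query here
        }
    }
}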