What am I missing! Timetree query not working - Neo4j

MATCH startPath = (event:RESERVATION)-[]->(sd:DAY)<-[:`5`]-(sm:MONTH)<-[:`1`]-(sy:YEAR)<-[:`2016`]-(room:ROOM)
WHERE event.reservationId = 44
RETURN startPath
and
MATCH endPath = (event:RESERVATION)-[]->(ed:DAY)<-[:`6`]-(em:MONTH)<-[:`1`]-(ey:YEAR)<-[:`2016`]-(room:ROOM)
WHERE event.reservationId = 44
RETURN endPath
both return valid paths, but when combined as
MATCH startPath = (event:RESERVATION)-[]->(sd:DAY)<-[:`5`]-(sm:MONTH)<-[:`1`]-(sy:YEAR)<-[:`2016`]-(room:ROOM),
endPath = (event:RESERVATION)-[]->(ed:DAY)<-[:`6`]-(em:MONTH)<-[:`1`]-(ey:YEAR)<-[:`2016`]-(room:ROOM)
WHERE event.reservationId = 44
RETURN startPath, endPath
returns no rows!
What am I missing?

The last query requires startPath and endPath to end at the exact same ROOM node, because both patterns use the same room identifier. Your data probably has no node that satisfies both paths at once.
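One way to confirm that the two rooms really differ is to bind the endpoints to separate identifiers and compare them; a minimal sketch against the same model (startRoom and endRoom are illustrative names):
MATCH startPath = (event:RESERVATION)-[]->(sd:DAY)<-[:`5`]-(sm:MONTH)<-[:`1`]-(sy:YEAR)<-[:`2016`]-(startRoom:ROOM),
      endPath = (event)-[]->(ed:DAY)<-[:`6`]-(em:MONTH)<-[:`1`]-(ey:YEAR)<-[:`2016`]-(endRoom:ROOM)
WHERE event.reservationId = 44
RETURN startPath, endPath, startRoom = endRoom AS sameRoom
If this returns rows with sameRoom = false, the combined query with a single room identifier can never match.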

Related

Python-Twitter API's function, GetSearch, returns fewer results for a wider radius

When I use GetSearch with the geocode parameter, searching in a 1-mile radius returns the most results. This:
results = api.GetSearch(term = "treat", geocode = ("37.781157", "-122.398720", "1mi"), max_id = 1061378028763316224, count = 20)
returns 18 tweets, while this:
results = api.GetSearch(term = "treat", geocode = ("37.781157", "-122.398720", "5mi"), max_id = 1061378028763316224, count = 20)
returns zero.
Is there anything I'm doing wrong here?

cl_http_utility not normalizing my URL. Why?

Via an enterprise service consumer I connect to a web service, which returns some data, including URLs.
However, I have tried every method of the class mentioned above, and none of them converts the Unicode escape sequences inside my URL into the proper readable characters (in this case '=' and '&').
The only method that behaves as expected is is_valid_url, which returns false when I pass URLs like this:
http://not_publish-workflow-dev.hq.not_publish.com/lc/content/forms/af/not_publish/request-datson-internal/v01/request-datson-internal.html?taskId\u003d105862\u0026wcmmode\u003ddisabled
What am I missing?
It seems that this format is for JSON values. Usually = and & don't need to be written with the \u prefix. To decode all \u escape sequences, you may use this code:
DATA(json_value) = `http://not_publish-workflow-dev.hq.not_publish.com/lc`
                && `/content/forms/af/not_publish/request-datson-internal/v01`
                && `/request-datson-internal.html?taskId\u003d105862\u0026wcmmode\u003ddisabled`.
" Find every \uXXXX escape and replace it from right to left, so that the
" offsets found earlier stay valid while the string shrinks.
FIND ALL OCCURRENCES OF REGEX '\\u....' IN json_value RESULTS DATA(matches).
SORT matches BY offset DESCENDING.
LOOP AT matches ASSIGNING FIELD-SYMBOL(<match>).
  DATA hex2 TYPE x LENGTH 2.
  " Take the four hex digits after "\u" and convert the code point to a character.
  hex2 = to_upper( substring( val = json_value+<match>-offset(<match>-length) off = 2 ) ).
  DATA(uchar) = cl_abap_conv_in_ce=>uccp( hex2 ).
  REPLACE SECTION OFFSET <match>-offset LENGTH <match>-length OF json_value WITH uchar.
ENDLOOP.
ASSERT json_value = `http://not_publish-workflow-dev.hq.not_publish.com/lc`
                 && `/content/forms/af/not_publish/request-datson-internal/v01`
                 && `/request-datson-internal.html?taskId=105862&wcmmode=disabled`.
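The trailing ASSERT is only a self-check: it confirms that, after the loop, the decoded string is the plain URL with = and & restored.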
I hate to answer my own questions, but I found my own solution by manually replacing those Unicode escapes. It is similar to Sandra's idea, but it can convert ANY Unicode escape.
I share it here in case anyone else needs it.
DATA: lt_res_tab TYPE match_result_tab.
DATA(valid_url) = url.
" Replace the \uXXXX escapes one at a time until none are left.
FIND ALL OCCURRENCES OF REGEX '\\u.{4}' IN valid_url RESULTS lt_res_tab.
WHILE lines( lt_res_tab ) > 0.
  DATA(match) = substring( val = valid_url off = lt_res_tab[ 1 ]-offset len = lt_res_tab[ 1 ]-length ).
  DATA(hex_unicode) = to_upper( match+2 ). " the four hex digits after "\u"
  DATA(char) = cl_abap_conv_in_ce=>uccp( uccp = hex_unicode ).
  valid_url = replace( val = valid_url off = lt_res_tab[ 1 ]-offset len = lt_res_tab[ 1 ]-length with = char ).
  FIND ALL OCCURRENCES OF REGEX '\\u.{4}' IN valid_url RESULTS lt_res_tab.
ENDWHILE.
WRITE / url.
WRITE / valid_url.

I need to fix a malformed pattern error

I want to replace % signs with a $. I tried using an escape character () but that didn't work. I am using Lua 5.1 and I get a malformed pattern error (ends with '%'). This is bugging me because I don't know how to fix it.
io.write("Search: ") search = io.read()
local query = search:gsub("%", "%25") -- Where I put the % sign.
query = query:gsub("+", "%2B")
query = query:gsub(" ","+")
query = query:gsub("/", "%2F")
query = query:gsub("#", "%23")
query = query:gsub("$", "%24")
query = query:gsub("#", "%40")
query = query:gsub("?", "%3F")
query = query:gsub("{", "%7B")
query = query:gsub("}","%7D")
query = query:gsub("[","%5B")
query = query:gsub("]","%5D")
query = query:gsub(">", "%3E")
query = query:gsub("<", "%3C")
local url = "https://www.google.com/#q=" .. query
print(url)
Output reads:
malformed pattern (ends with '%')
You need to escape % and write %%.
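The same escaping is needed in the replacement string, because % followed by a digit is read there as a capture reference. A minimal sketch of that single substitution:
local query = search:gsub("%%", "%%25") -- "%%" in the pattern matches a literal %, and "%%25" emits the text "%25"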
The idiomatic way to do this in Lua is to give gsub a table:
local reserved = "%+/#$#?{}[]><"
local escape = {}
for c in reserved:gmatch(".") do
  escape[c] = string.format("%%%02X", c:byte()) -- e.g. "%" -> "%25"
end
escape[" "] = "+"
query = search:gsub(".", escape)
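As a quick check, here is what the table-driven gsub produces for an illustrative search string (assuming the escape table built above):
local search = "50% off treats?"
local query = search:gsub(".", escape)
print("https://www.google.com/#q=" .. query) -- https://www.google.com/#q=50%25+off+treats%3F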

Neo4j cypher avoid negative values

I have a node whose properties are based on another node. For example:
MATCH (n:draft {sn:1}),(m:final {sn:1})
SET m.count = m.count - n.count
RETURN m
This seems to work. However, what I want is to set m.count to 0 when n.count > m.count, because in that case m.count - n.count would be negative, and I want to avoid negative values.
You should be able to do this:
MATCH (n:draft {sn:1}),(m:final {sn:1})
SET m.count = CASE WHEN n.count > m.count THEN 0 ELSE m.count - n.count END
RETURN m
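If you want to preview the effect before writing it, the same CASE expression can be run read-only first; a small sketch, assuming the same labels and properties:
MATCH (n:draft {sn:1}), (m:final {sn:1})
RETURN m.count AS currentCount,
       CASE WHEN n.count > m.count THEN 0 ELSE m.count - n.count END AS newCount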

LPeg Increment for Each Match

I'm making a serialization library for Lua, and I'm using LPeg to parse the string. I've got K/V pairs working (with the key explicitly named), but now I'm going to add auto-indexing.
It'll work like so:
#"value"
#"value2"
Will evaluate to
{
  [1] = "value",
  [2] = "value2"
}
I've already got the value matching working (strings, tables, numbers, and Booleans all work perfectly), so I don't need help with that; what I'm looking for is the indexing. For each match of #[value pattern], it should capture the number of #[value pattern] matches found so far - in other words, I can match a sequence of values (#"value1" #"value2"), but I don't know how to assign them indexes according to the number of matches. If that's not clear enough, just comment and I'll attempt to explain it better.
Here's something of what my current pattern looks like (using compressed notation):
local process = {} -- Process a captured value
process.number = tonumber
process.string = function(s) return s:sub(2, -2) end -- Strip off the opening and closing quotes
process.boolean = function(s) if s == "true" then return true else return false end end
number = [decimal number, scientific notation] / process.number
string = [double or single quoted string, supports escaped quotation characters] / process.string
boolean = (P("true") + "false") / process.boolean
table = [balanced brackets] / [parse the table]
type = number + string + boolean + table
at_notation = (P("#") * whitespace * type) / [creates a table that includes the key and value]
As you can see in the last line of code, I've got a function that does this:
k,v matched in the pattern
-- turns into --
{k, v}
-- which is then added into an "entry table" (I loop through it and add it into the return table)
Based on what you've described so far, you should be able to accomplish this using a simple capture and table capture.
Here's a simplified example I knocked up to illustrate:
lpeg = require 'lpeg'
l = lpeg.locale(lpeg)
whitesp = l.space ^ 0
bool_val = (l.P "true" + "false") / function (s) return s == "true" end
num_val = l.digit ^ 1 / tonumber
string_val = '"' * l.C(l.alnum ^ 1) * '"'
val = bool_val + num_val + string_val
at_notation = l.Ct( (l.P "#" * whitesp * val * whitesp) ^ 0 )
local testdata = [[
#"value1"
#42
# "value2"
#true
]]
local res = l.match(at_notation, testdata)
The match returns a table with the following contents:
{
[1] = "value1",
[2] = 42,
[3] = "value2",
[4] = true
}
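If you want to see the values together with the indexes they were assigned, you could iterate over the result; a small usage sketch:
for i, v in ipairs(res) do
  print(i, v)
end
-- 1  value1
-- 2  42
-- 3  value2
-- 4  true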
