I have this filter working well, but with a new use case where the property "p" can be null or an empty array [], the parser stops evaluating the expression.
The issue is here: ".p[]?.product.productId", when p is null or an empty array [].
When the p property looks like [{}] or [{"id":123}], it works well.
I'm breaking the filter into lines to make it easier to understand.
.p as $p
| .p[]?.product.productId as $pa
| .io[]
| select(.product.productId == ($pa) or .description == "product description x")
| .product.productId as $pid
| {"offerId": .offerId,
"description": .description,
"required":
"($p[] | select(.product.productId == $pid) | .required)",
"applied": false,
"amount": (if .prices | length == 0
then 0
elif .prices[0].amount != null
then .prices[0].amount
else .prices[0].amountPercentage
end)}
Input:
{
"p": null,
"io": [{
"offerId": 5593,
"description": "product description x",
"product": {
"productId": 393,
"description": "product description x 2",
"type": "Insurance"
},
"prices": [
{
"amount": null,
"amountPercentage": 4.13999987,
"status": "On"
}
]
}]
}
All I want is to be able to ignore the p when it is null or [].
I'm aware of the literal expression "($p[] | select(.product.productId == $pid) | .required)"
jqplay.org/s/wYwKUFM2XR
E? is like try E catch empty, whereas what you seem to want is either try E catch null or perhaps E? // null.
.p[]? is not the same as .p?[] or .p?[]?:
$ jq -n '[] | .p[]?'
jq: error (at <unknown>): Cannot index array with string "p"
$ jq -n '[] | .p?[]'
$
$ jq -n '[] | .p?[]?'
$
Specifically, .p[] is like .p | (try .[] catch empty), so there is nothing to stop the .p from raising an exception.
You might like to consider using try explicitly:
$ jq -n '[] | try .p[] catch null'
$
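Applied to the question's data, a minimal sketch (not the full filter, just the failing part; the sample objects here are made up to cover the three cases): normalizing .p with // [] turns null and [] into "no entries", so nothing downstream raises an error:

```shell
# Normalize .p before iterating: null and [] both become an empty array,
# so downstream expressions simply produce no results instead of failing.
echo '{"p": null} {"p": []} {"p": [{"product": {"productId": 393}}]}' |
  jq -c '(.p // []) | map(.product.productId)'
# outputs:
# []
# []
# [393]
```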
I am trying to create a KQL query where I can filter which locations are inside a GeoJSON polygon using the function geo_point_in_polygon.
First, I am trying to obtain my polygon with this query:
let polygon = adt_dh_FerrovialTwins_westeurope
| where ModelId == "dtmi:ferrovial:domain:models:v1:acopiotemploraltwin;1"
and Key == "localizacion"
| top 1 by Id
| project (Value);
polygon;
And with this data, I am trying to run another query to count how many times a location falls inside this polygon:
let polygon = adt_dh_FerrovialTwins_westeurope
| where ModelId == "dtmi:ferrovial:domain:models:v1:acopiotemploraltwin;1"
and Key == "localizacion"
| top 1 by Id
| project (Value);
adt_dh_FerrovialTwins_westeurope
| where ModelId == "dtmi:ferrovial:domain:models:v1:camiondetierrastwin;1"
and Key == "localizacion"
| extend lon=(Value).geometry.coordinates[0], lat= (Value).geometry.coordinates[1]
| project todecimal(lon), todecimal(lat), Id
| where geo_point_in_polygon(lon, lat, polygon)
| summarize count() by Id, hash = geo_point_to_s2cell(lon, lat, 7)
| project geo_s2cell_to_central_point(hash), Id, count_
| render piechart with (kind=map) // map rendering available in Kusto Explorer desktop
It says it needs a dynamic value, but it is already one. I don't know how to solve this, because if I define the polygon variable as a dynamic literal, it works OK:
let polygon = dynamic({
"type": "Polygon",
"coordinates": [
[
[
-22.6430674134452,
-69.1258109131277
],
[
-22.6430533208934,
-69.1250474377359
],
[
-22.6453362953948,
-69.1243603098833
],
[
-22.6452658337868,
-69.1264980409803
],
[
-22.6431096910912,
-69.1257803741119
],
[
-22.6430674134452,
-69.1258109131277
]
]
]
});
adt_dh_FerrovialTwins_westeurope
| where ModelId == "dtmi:ferrovial:domain:models:v1:camiondetierrastwin;1"
and Key == "localizacion"
| extend lon=(Value).geometry.coordinates[0], lat= (Value).geometry.coordinates[1]
| project todecimal(lon), todecimal(lat), Id
| where geo_point_in_polygon(lon, lat, polygon)
| summarize count() by Id, hash = geo_point_to_s2cell(lon, lat, 7)
| project geo_s2cell_to_central_point(hash), Id, count_
| render piechart with (kind=map) // map rendering available in Kusto Explorer desktop
Here is a minimal, reproducible example:
let polygon = print dynamic({"type": "Polygon","coordinates": []});
print geo_point_in_polygon(0, 0, polygon)
Fiddle
You shared the IntelliSense error:
A value of type dynamic expected
However, you didn't share the run-time error:
Failed to resolve scalar expression named 'polygon'
As the error suggests, the function expects a scalar as an argument.
Your tabular expression can be converted to scalar using the toscalar() function.
let polygon = toscalar(print dynamic({"type": "Polygon","coordinates": []}));
print geo_point_in_polygon(0, 0, polygon)
Fiddle
I have this stream of objects:
{
"key": "a",
"value": 1
}
{
"key": "b",
"value": 1
}
{
"key": "c",
"value": 1
}
{
"key": "d",
"value": 1
}
{
"key": "e",
"value": 1
}
And this stream of booleans:
(true,true,false,false,true)
I want to compare the two one-on-one and only print the object if the corresponding boolean is true.
So I want to output
{
"key": "a",
"value": 1
}
{
"key": "b",
"value": 1
}
{
"key": "e",
"value": 1
}
I tried (https://jqplay.org/s/GGTHEfQ9s3)
filter:
. as $input | foreach (true,true,false,false,true) as $dict ($input; select($dict))
input:
{
"key": "a",
"value": 1
}
{
"key": "b",
"value": 1
}
{
"key": "c",
"value": 1
}
{
"key": "d",
"value": 1
}
{
"key": "e",
"value": 1
}
But I get output:
{"key":"a","value":1}
{"key":"a","value":1}
null
{"key":"b","value":1}
{"key":"b","value":1}
null
{"key":"c","value":1}
{"key":"c","value":1}
null
{"key":"d","value":1}
{"key":"d","value":1}
null
{"key":"e","value":1}
{"key":"e","value":1}
null
Help will be appreciated.
One way would be to read in the streams as arrays, use transpose to match their items, and select by one and output the other:
jq -s '[.,[(true,true,false,false,true)]] | transpose[] | select(.[1])[0]' objects.json
Demo
Another approach would be to read in the streams as arrays, convert the booleans array into those indices where conditions match, and use them to reference into the objects array:
jq -s '.[[(true,true,false,false,true)] | indices(true)[]]' objects.json
Demo
The same approach but using nth to reference into the inputs stream requires more caution, as successively consuming stream inputs means that nth must be given relative distances, not absolute positions. A conversion can be implemented by successively checking the position of the next true value using index and a while loop:
jq -n 'nth([true,true,false,false,true] | while(. != []; .[index(true) + 1:]) | index(true) | values; inputs)' objects.json
Demo
One could also use reduce to directly iterate over the boolean values, and just select any appropriate input:
jq -n 'reduce (true,true,false,false,true) as $dict ([]; . + [input | select($dict)]) | .[]' objects.json
Demo
A solution using foreach, like you intended, would also need the -n option in order not to miss the first item:
jq -n 'foreach (true,true,false,false,true) as $dict (null; input | select($dict))' objects.json
Demo
Unfortunately, each invocation of jq can currently handle at most one external JSON stream. This is not usually an issue unless both streams are very large, so in this answer I'll focus on a solution that scales. In fact, the amount of computer memory required is minuscule no matter how large the streams may be.
For simplicity, let's assume that:
demon.json is a file consisting of a stream of JSON boolean values (i.e., not comma-separated);
object.json is your stream of JSON objects;
the streams have the same length;
we are working in a bash or bash-like environment.
Then we could go with:
paste -d '\t' demon.json <(jq -c . objects.json) | jq -n '
foreach inputs as $boolean (null; input; select($boolean))'
So apart from the startup costs of paste and jq, we basically only need enough memory to hold one of the objects in objects.json at a time. This solution is also very fast.
Of course, if objects.json were already in JSONL (JSON-lines) format, then the first call to jq above would not be necessary.
I need to find all strings that contain "foo", but exclude the "foo" part itself from the output.
grep "foo" | grep -ve "foo" returns zero strings (the second grep inverts the match, so it filters out every line the first one found).
Example input:
"category": "aaaa",
"amount": 0.01208210,
"vout": 0,
"fee": 0.00007523,
"confirmations": 12345,
"blockhash": "12345",
"blockindex": 12345,
"blocktime": 12345,
As output I need just:
0.01208210
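One way to print only what follows the match, assuming GNU grep (-P is a GNU extension) and using "amount" in place of "foo" as in the sample input; input.txt stands in for your actual file:

```shell
# \K resets the start of the reported match, so only the number after
# '"amount": ' is printed, not the field name itself
# (requires GNU grep built with PCRE support, i.e. the -P flag).
grep -oP '"amount": \K[0-9.]+' input.txt
# prints: 0.01208210
```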
I want to read relations from my JSON file instead of hard-coding them.
for example:
instead of
MERGE (arg1)-[:relation]->(arg2)
I want to have something like:
MERGE (arg1)-[:v.relation]->(arg2).
My Json file is as following:
{
"bacf06771e0f4fc5a8e68c30fc77c9c4": {
"arg1": "the Treasury",
"arg2": "details of the November refunding",
"relation": "will announce",
"id": "bacf06771e0f4fc5a8e68c30fc77c9c4",
"linkedContexts": [
{
"targetID": "948eeebd73564adab7dee5c6f177b3b9",
"classification": "CONTRAST"
}
]
},
"948eeebd73564adab7dee5c6f177b3b9": {
"arg1": "the funding",
"arg2": "",
"relation": "will be delayed",
"id": "948eeebd73564adab7dee5c6f177b3b9",
"linkedContexts": [
{
"targetID": "006a71e51295440fab7a8e8c697d2ba6",
"classification": "CONDITION"
}
]
}
}
I tried:
CALL apoc.load.json("files:///example.json") YIELD value
UNWIND [k IN KEYS(value) | value[k]] AS v
MERGE (arg1:Arg1 {subject:v.arg1})
MERGE (arg2:Arg2 {object:v.arg2})
MERGE (arg1)-[:v.relation]->(arg2)
I got this error:
Neo.ClientError.Statement.SyntaxError: Invalid input '.': expected an identifier character, whitespace, '|', a length specification, a property map or ']' (line 13, column 17 (offset: 444))
"merge (arg1)-[:v.relation]->(arg2) "
^
Currently, it's not possible to create relationship types dynamically with Cypher.
You can use APOC's apoc.merge.relationship procedure to create nodes/relationships dynamically.
CALL apoc.load.json("files:///example.json") YIELD value
UNWIND [k IN KEYS(value) | value[k]] AS v
MERGE (arg1:Arg1 {subject:v.arg1})
MERGE (arg2:Arg2 {object:v.arg2})
CALL apoc.merge.relationship(arg1,v.relation,{},{},arg2) YIELD rel
RETURN count(*);
Borrowing an MWE from this question, I have a set of nested dicts of dicts:
{
"type": "A",
"a": "aaa",
"payload": {"another":{"dict":"value", "login":"user1"}},
"actor": {"dict":"value", "login":"user2"}
}
{
"type": "B",
"a": "aaa",
"payload": {"another":{"dict":"value", "login":"user3"}},
"actor": {"dict":"value", "login":"user4"}
}
{
"type": "A",
"a": "aaa",
"b": "bbb",
"payload": {"another":{"dict":"value", "login":"user5"}},
"actor": {"dict":"value", "login":"user6"}
}
{
"type": "A",
"a": "aaa",
"b": "bbb",
"payload": {"login":"user5"},
"actor": {"login":"user6"}
}
For dictionaries that have "type":"A", I want to get the username from the payload dict and the username from the actor dict. The same username can appear multiple times. I would like to store a txt file with a column of actor logins (ID1) and a column of payload logins (ID2), like this:
ID1 ID2
user2 user1
user6 user5
user6 user5
Right now, I have a start:
zgrep "A" | zgrep -o 'login":"[^"]*"' | zgrep -o 'payload":"[^"]*"' > usernames_list.txt
But of course this won't work, because I need to find login within the payload dict and login within the actor dict for each dict of type A.
Any thoughts?
I am assuming you have the payload and actor dictionaries for all entries of type A.
1. Parse out the user name from the payload entries and redirect them to a file named payload.txt.
2. Parse out the user name from the actor entries and redirect them to a different file named actor.txt.
3. Use the paste command to join the entries and output them the way you want.
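Those steps can also be collapsed into a single jq call; a sketch, assuming the objects live in a file named input.json (first(.. | .login? // empty) is one pattern that finds the first login at any depth, covering both payload shapes in the example):

```shell
# For each "type": "A" object, emit the actor login (ID1) and the payload
# login (ID2) as one tab-separated line; ".." searches any nesting depth,
# and "// empty" discards objects that have no login key.
jq -r 'select(.type == "A")
       | [first(.actor | .. | .login? // empty),
          first(.payload | .. | .login? // empty)]
       | @tsv' input.json > usernames_list.txt
```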