Rego object.get with multilevel key - open-policy-agent

Is there any way to use object.get with a multi-level key?
My input looks like this:
{
    "pipelineParameters": {
        "k8": {
            "NODES": "1"
        },
        "ec2": {
            "NODES": "0"
        }
    }
}
My data looks like this:
{
    "key": "pipelineParameters.k8.NODES"
}
How can I get the value from input based on a multi-level key?
Sample code:
https://play.openpolicyagent.org/p/iR15XnMctP

The object.get function does not support multi-level keys. You could use the walk function for this if you represent the key as an array:
input = {
    "pipelineParameters": {
        "k8": {
            "NODES": "1"
        },
        "ec2": {
            "NODES": "0"
        }
    }
}
For example:
> walk(input, [["pipelineParameters", "k8", "NODES"], "1"])
true
> walk(input, [["pipelineParameters", "k8", "NODES"], x])
+-----+
| x |
+-----+
| "1" |
+-----+
> walk(input, [["pipelineParameters", y, "NODES"], x])
+-----+-------+
| x | y |
+-----+-------+
| "1" | "k8" |
| "0" | "ec2" |
+-----+-------+
To convert your key into an array, you can simply write:
split(key, ".")
For example:
split("pipelineParameters.k8.NODES", ".")
[
    "pipelineParameters",
    "k8",
    "NODES"
]
Putting it all together:
> walk(input, [split("pipelineParameters.k8.NODES", "."), x])
+-----+
| x |
+-----+
| "1" |
+-----+
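Putting the same idea into a policy, a minimal sketch of a rule that looks up the value for the dotted key could look like this (assuming the key is supplied as data.key, as in the question's playground example; untested):
package play

value = v {
    walk(input, [split(data.key, "."), v])
}
If the path does not exist in input, value is simply undefined.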

Related

Active Record querying with joins and group by

I'm designing an API to get data from the following scenario:
brands table:
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | bigint(20) | NO | PRI | NULL | auto_increment |
| name | varchar(255) | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
items table:
+---------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------------------+--------------+------+-----+---------+----------------+
| id | bigint(20) | NO | PRI | NULL | auto_increment |
| category_id | bigint(20) | YES | MUL | NULL | |
| brand_id | bigint(20) | YES | | NULL | |
+---------------------------+--------------+------+-----+---------+----------------+
item_skus table:
+---------------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------------------+--------------+------+-----+---------+----------------+
| id | bigint(20) | NO | PRI | NULL | auto_increment |
| item_id | bigint(20) | YES | MUL | NULL | |
| number_of_stock | int(11) | YES | | NULL | |
+---------------------------+--------------+------+-----+---------+----------------+
The Item model's associations with ItemSku and Brand:
belongs_to :brand
has_many :skus, class_name: 'ItemSku'
Simply put, I want the counts of stock-available items and of all items for each brand:
{
    "brandCounts": [
        {
            "id": 7006,
            "name": "Brand 01",
            "stockAvailableItemCount": 50,
            "allItemCount": 60
        },
        {
            "id": 20197,
            "name": "Brand 02",
            "stockAvailableItemCount": 150,
            "allItemCount": 660
        }
    ]
}
Implementation:
brand_counts = []
brand_counts_hash = Hash.new
items = Item.left_outer_joins(:skus).where(category_id: params[:id]).pluck(:brand_id, :number_of_stock, :item_id)
items.each do |item|
  brand_id = item[0]
  stock = item[1]
  if brand_counts_hash.has_key?(brand_id)
    item_count_arry = brand_counts_hash[brand_id]
    stock_available_item_count = item_count_arry[0]
    all_item_count = item_count_arry[1]
    if stock > 0
      brand_counts_hash[brand_id] = [stock_available_item_count + 1, all_item_count + 1]
    else
      brand_counts_hash[brand_id] = [stock_available_item_count, all_item_count + 1]
    end
  else
    stock_available_item_count = 0
    all_item_count = 0
    if stock > 0
      stock_available_item_count += 1
      all_item_count += 1
      brand_counts_hash[brand_id] = [stock_available_item_count, all_item_count]
    else
      all_item_count += 1
      brand_counts_hash[brand_id] = [stock_available_item_count, all_item_count]
    end
  end
end
brand_counts_hash.each do |key, value|
  stock_available_item_count = value[0]
  all_item_count = value[1]
  brand_counts << {
    id: key,
    name: get_brand_name(key),
    stock_available_item_count: stock_available_item_count,
    all_item_count: all_item_count
  }
end
@brand_counts = brand_counts
render 'brands/counts/index', formats: :json
end

def get_brand_name(brand_id)
  brand = Brand.find_by(id: brand_id)
  brand.name unless brand.nil?
end
Is there a way to optimize this further without multiple loops maybe?
Assume your Brand model also has the following association defined:
has_many :items
and that the final result you want looks like this:
{
    "brandCounts": [
        {
            "id": 7006,
            "name": "Brand 01",
            "stockAvailableItemCount": 50,
            "allItemCount": 60
        },
        {
            "id": 20197,
            "name": "Brand 02",
            "stockAvailableItemCount": 150,
            "allItemCount": 660
        }
    ]
}
The following code may not work as-is when you copy and paste it into your project, but it demonstrates how the problem can be solved with less code:
Brand.includes(items: :skus).all.map do |brand|
  {
    id: brand.id,
    name: brand.name,
    stockAvailableItemCount: brand.items.count { |item| item.skus.sum { |sku| sku.number_of_stock.to_i } > 0 },
    allItemCount: brand.items.size
  }
end
If you need JSON format, just call to_json on the result of the above code.

Array.map for Rego, or how to combine RBAC with API routes

I would like to define permissions in JSON data such as:
"permissions": [
{
"resource": ["users", ":uid", "salary"],
"action": "GET"
}
]
Now, when evaluating, I want to replace :uid with input.subject. How would I go about this? Is there something like Array.prototype.map() in Rego?
PS: I know I can do this, for example:
allow {
    input.action = ["GET", "POST"][_]
    input.resource = ["users", uid, "salary"]
    input.subject = uid
}
But instead of spelling out each path in the policy, I would like to use RBAC (roles + permissions) so that I can pass those API endpoint permissions as JSON data. Is it possible?
You can certainly write a policy that scans over all of the permissions and checks if there's a match. Here's a simple (but complete) example:
package play

permissions = [
    {
        "resource": "/users/:uid/salary",
        "action": "GET"
    },
    {
        "resource": "/metrics",
        "action": "GET"
    }
]

default allow = false

allow {
    some p
    matching_permission[p]
}

matching_permission[p] {
    some p
    matching_permission_action[p]
    matching_permission_resource[p]
}

matching_permission_action[p] {
    some p
    permissions[p].action == input.action
}

matching_permission_resource[p] {
    some p
    path := replace(permissions[p].resource, ":uid", input.subject)
    path == input.resource
}
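For instance, evaluating this policy against a hypothetical input (the subject, action, and resource values below are made up for illustration) allows the request:
> data.play.allow with input as {"subject": "bob", "action": "GET", "resource": "/users/bob/salary"}
true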
The downside of this approach is that each evaluation has to, in the worst case, scan over all of the permissions. As more permissions are added, evaluation takes longer. Depending on how large the permission set can get, this might not satisfy your latency requirements.
The typical answer to this is to use partial evaluation to pre-evaluate the permissions data and generate a rule set that can be evaluated in constant-time due to rule indexing. This approach is covered on the Policy Performance page. For example, if you run partial evaluation on this policy, this is the output:
$ opa eval -d play.rego -f pretty 'data.play.allow' -p --disable-inlining data.play.allow
+-----------+-------------------------------------------------------------------------+
| Query 1 | data.partial.play.allow |
+-----------+-------------------------------------------------------------------------+
| Support 1 | package partial.play |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | replace("/users/:uid/salary", ":uid", input.subject) = input.resource |
| | } |
| | |
| | allow { |
| | "POST" = input.action |
| | |
| | replace("/metrics", ":uid", input.subject) = input.resource |
| | } |
+-----------+-------------------------------------------------------------------------+
In this case, the equality statements would be recognized by the rule indexer. However, the indexer will not be able to efficiently index the ... = input.resource statements due to the replace() call.
Part of the challenge is that this policy is not pure RBAC; it's an attribute-based policy that encodes an equality check (between a path segment and the subject) into the permission data. If we restructure the permission data a little bit, we can work around this:
package play2

permissions = [
    {
        "owner": "subject",
        "resource": "salary",
        "action": "GET"
    },
    {
        "resource": "metrics",
        "action": "GET"
    }
]

allow {
    some p
    matching_permission[p]
}

matching_permission[p] {
    some p
    matching_permission_action[p]
    matching_permission_resource[p]
    matching_permission_owner[p]
}

matching_permission_action[p] {
    some p
    permissions[p].action == input.action
}

matching_permission_resource[p] {
    some p
    permissions[p].resource == input.resource
}

matching_permission_owner[p] {
    some p
    permissions[p]
    not permissions[p].owner
}

matching_permission_owner[p] {
    some p
    owner := permissions[p].owner
    input.owner = input[owner]
}
This version is quite similar except we have explicitly encoded ownership into the permission model. The "owner" field indicates the resource owner (provided in the input document under the "owner" key) must be equal to the specified input value (in this example, input.subject).
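For instance, a hypothetical input where the caller is also the resource owner (values made up for illustration) is allowed:
> data.play2.allow with input as {"subject": "bob", "owner": "bob", "action": "GET", "resource": "salary"}
true
A request for "metrics" needs no ownership check, because that permission has no "owner" field and therefore matches the first matching_permission_owner rule.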
Running partial evaluation on this version yields the following output:
$ opa eval -d play2.rego -f pretty 'data.play2.allow' -p --disable-inlining data.play2.allow
+-----------+-------------------------------+
| Query 1 | data.partial.play2.allow |
+-----------+-------------------------------+
| Support 1 | package partial.play2 |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | "salary" = input.resource |
| | |
| | input.owner = input.subject |
| | } |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | "metrics" = input.resource |
| | } |
+-----------+-------------------------------+
All of the conditions in the rule bodies are now recognized by the rule indexer, and evaluation latency will scale with the number of rules that could potentially match the input. The tradeoff, of course, is that whenever the permissions change, partial evaluation has to be re-executed.

Delphi: interpreter for string expression with operators

I want to evaluate at runtime a string expression like:
((foo = true) or (bar <> 'test')) and (baz >= 1)
The string is entered by the user. The user can create a rule by picking a property from a set (e.g. foo, bar, baz), entering the target value to evaluate (string, number, or boolean), and choosing the operator (=, <>, >, <), e.g.:
| Id | Property | Operator | Value | Expression |
-------------------------------------------------------------------------------------------
| $1 | foo | = | true | (foo = true) |
-------------------------------------------------------------------------------------------
| $2 | bar | <> | 'test' | (bar <> 'test') |
-------------------------------------------------------------------------------------------
| $3 | baz | >= | 1 | (baz >= 1) |
-------------------------------------------------------------------------------------------
The single rules can be combined and nested into child/parent rules by choosing an operator like and or or, e.g.:
| Id | Property | Operator | Value | Expression |
-------------------------------------------------------------------------------------------
| $1 | foo | = | true | (foo = true) |
-------------------------------------------------------------------------------------------
| $2 | bar | <> | 'test' | (bar <> 'test') |
-------------------------------------------------------------------------------------------
| $3 | baz | >= | 1 | (baz >= 1) |
-------------------------------------------------------------------------------------------
| $4 | $1 | or | $2 | ((foo = true) or (bar <> 'test')) |
-------------------------------------------------------------------------------------------
| $5 | $4 | and | $3 | ((foo = true) or (bar <> 'test')) and (baz >= 1) |
-------------------------------------------------------------------------------------------
In pseudo code, the idea is:
aExpressionEngine := TExpressionEngine.Create;
try
  // Adds to the evaluation scope all the properties with the
  // inputted value. AddToScope accepts a string and a variant.
  aExpressionEngine.AddToScope('foo', false);
  aExpressionEngine.AddToScope('bar', 'qux');
  aExpressionEngine.AddToScope('baz', 10);
  // Evaluate the expression; the result is always a boolean.
  Result := aExpressionEngine.Eval('(((foo = true) or (bar <> ''test'')) and (baz >= 1))');
finally
  aExpressionEngine.Free;
end;
In this pseudo-code example, the expression to evaluate becomes (after replacing the properties with the scope values):
(((false = true) or ('qux' <> 'test')) and (10 >= 1)) // TRUE
By googling, I have found a few libraries for evaluating math expressions, but nothing for evaluating logical conditions.
Does Delphi have something for evaluating string expressions like this?

Find key-value pair in PostgreSQL's HSTORE

Given a table games with a column identifiers of type HSTORE:
| id | name | identifiers |
|----|------------------|------------------------------------|
| 1 | Metal Gear | { sku: 109127072, ean: 512312342 } |
| 2 | Theme Hospital | { sku: 399348341 } |
| 3 | Final Fantasy | { ean: 109127072, upc: 999284928 } |
| 4 | Age of Mythology | { tbp: 'a998fa31'} |
| 5 | Starcraft II | { sku: 892937742, upc: 002399488 } |
How can I find if a given set of key-value pairs has at least one match in the database?
For example, if I supply this array: [ {sku: 109127072 }, { upc: 999284928 } ], I should see:
| id | name | identifiers |
|----|----------------|------------------------------------|
| 1 | Metal Gear | { sku: 109127072, ean: 512312342 } |
| 3 | Final Fantasy | { ean: 109127072, upc: 999284928 } |
For Rails 5, I think you should try using the or operator, like:
h = { sku: 109127072, upc: 999284928 }
rela = Game.where("identifiers -> ? = ?", h.keys.first.to_s, h[h.keys.first].to_s)
h.keys[1..-1].reduce(rela) { |r, key| r.or(Game.where("identifiers -> ? = ?", key.to_s, h[key].to_s)) }
# => relation with OR-ed conditions
For pre-Rails-5 versions you should use Arel, as described here.

Aerospike: Lua UDF always returns an empty result, even if the UDF returns the stream without any filtering, etc.

I cannot understand why aggregateQuery always returns an empty result. I tried testing in aql, with the same problem: 0 rows in set.
The indexes are all there.
aql> show indexes
+---------------+-------------+-----------+------------+-------+------------------------------+-------------+------------+-----------+
| ns | bin | indextype | set | state | indexname | path | sync_state | type |
+---------------+-------------+-----------+------------+-------+------------------------------+-------------+------------+-----------+
| "test" | "name" | "NONE" | "profiles" | "RW" | "inx_test_name" | "name" | "synced" | "STRING" |
| "test" | "age" | "NONE" | "profiles" | "RW" | "inx_test_age" | "age" | "synced" | "NUMERIC" |
aql> select * from test.profiles
+---------+-----+
| name    | age |
+---------+-----+
| "Sally" | 19  |
|         | 20  |
|         | 22  |
|         | 28  |
| "Ann"   | 22  |
| "Bob"   | 22  |
| "Tammy" | 22  |
| "Ricky" | 20  |
|         | 22  |
|         | 19  |
+---------+-----+
10 rows in set (0.026 secs)
aql> AGGREGATE mystream.avg_age() ON test.profiles WHERE age BETWEEN 20 and 29
0 rows in set (0.004 secs)
It seems that you are trying the example here.
There are two problems with the UDF script. Here is the code of the Lua script:
function avg_age(stream)
  local function female(rec)
    return rec.gender == "F"
  end
  local function name_age(rec)
    return map{ name=rec.name, age=rec.age }
  end
  local function eldest(p1, p2)
    if p1.age > p2.age then
      return p1
    else
      return p2
    end
  end
  return stream : filter(female) : map(name_age) : reduce(eldest)
end
First, there is no bin named 'gender' in your set, so you get 0 rows back from the aggregateQuery: the filter rejects every record.
Second, this script isn't doing what the function name 'avg_age' suggests; it just returns the eldest record with its name and age.
I paste my code below; it replaces the reduce function and alters the map and filter functions to meet the demand. You can simply skip the filter step.
function avg_age(stream)
  local count = 0
  local sum = 0
  local function female(rec)
    return true
  end
  local function name_age(rec)
    return rec.age
  end
  local function avg(p1, p2)
    count = count + 1
    sum = sum + p2
    return sum / count
  end
  return stream : filter(female) : map(name_age) : reduce(avg)
end
The output looks like below:
AGGREGATE mystream.avg_age() ON test.avgage WHERE age BETWEEN 20 and 29
+---------+
| avg_age |
+---------+
| 22 |
+---------+
1 row in set (0.001 secs)
