Kantu (Selenium): How to push an element to an existing array

How do I add an element to an existing array in Kantu?
I run the following macro, but names2, length2, and namesContent2 are not as expected:
{
  "Name": "testArrayPush",
  "CreationDate": "2019-8-28",
  "Commands": [
    {
      "Command": "storeEval",
      "Target": "new Array ('cat','dog','fish','dog','🐟','frog','dog','horse','??elephant')",
      "Value": "names"
    },
    {
      "Command": "storeEval",
      "Target": "storedVars['names'].length",
      "Value": "length"
    },
    {
      "Command": "storeEval",
      "Target": "storedVars['names']",
      "Value": "namesContent"
    },
    {
      "Command": "echo",
      "Target": "array names = ${namesContent}",
      "Value": ""
    },
    {
      "Command": "echo",
      "Target": "array length = ${length}",
      "Value": ""
    },
    {
      "Command": "storeEval",
      "Target": "[storedVars['names'],'Thomas']",
      "Value": "names2"
    },
    {
      "Command": "storeEval",
      "Target": "storedVars['names2'].length",
      "Value": "length2"
    },
    {
      "Command": "storeEval",
      "Target": "storedVars['names2']",
      "Value": "namesContent2"
    },
    {
      "Command": "echo",
      "Target": "array names2 = ${namesContent2}",
      "Value": ""
    },
    {
      "Command": "echo",
      "Target": "array length2 = ${length2}",
      "Value": ""
    }
  ]
}
This is the output, but I expected a new array of length 10. How do I do this?
[status]
Playing macro testArrayPush
[info]
Executing: | storeEval | new Array ('cat','dog','fish','dog','🐟','frog','dog','horse','??elephant') | names |
[info]
Executing: | storeEval | storedVars['names'].length | length |
[info]
Executing: | storeEval | storedVars['names'] | namesContent |
[info]
Executing: | echo | array names = ${namesContent} | |
[echo]
array names = cat,dog,fish,dog,🐟,frog,dog,horse,??elephant
[info]
Executing: | echo | array length = ${length} | |
[echo]
array length = 9
[info]
Executing: | storeEval | [storedVars['names'],'Thomas'] | names2 |
[info]
Executing: | storeEval | storedVars['names2'].length | length2 |
[info]
Executing: | storeEval | storedVars['names2'] | namesContent2 |
[info]
Executing: | echo | array names2 = ${namesContent2} | |
[echo]
array names2 = cat,dog,fish,dog,🐟,frog,dog,horse,??elephant,Thomas
[info]
Executing: | echo | array length2 = ${length2} | |
[echo]
array length2 = 2
[info]
Macro completed (Runtime 5.47s)
The first array is fine: it has 9 elements.
I want to add several new elements to this existing array (and later loop over them in the code).
But the second array has only two elements: the first element is the old array and the second element is the newly added element.
How do I do this correctly?

If you want to keep the original array unchanged, you can use concat, which appends the new element to a copy of the old array.
So
{
  "Command": "storeEval",
  "Target": "[storedVars['names'],'Thomas']",
  "Value": "names2"
}
turns to
{
  "Command": "storeEval",
  "Target": "storedVars['names'].concat('Thomas')",
  "Value": "names2"
},
Output:
[info]
Executing: | storeEval | storedVars['names'].concat('Thomas') | names2 |
[info]
Executing: | storeEval | storedVars['names2'].length | length2 |
[info]
Executing: | storeEval | storedVars['names2'] | namesContent2 |
[info]
Executing: | echo | array names2 = ${namesContent2} | |
[echo]
array names2 = cat,dog,fish,dog,🐟,frog,dog,horse,??elephant,Thomas
[info]
Executing: | echo | array length2 = ${length2} | |
[echo]
array length2 = 10
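The difference is easy to see in plain JavaScript (the language storeEval evaluates). This standalone sketch shows why the original attempt produced a two-element array:

```javascript
const names = ['cat', 'dog', 'fish'];

// [names, 'Thomas'] does not append: it builds a NEW two-element array
// whose first element is the whole old array.
const wrong = [names, 'Thomas'];
console.log(wrong.length);   // 2

// concat returns a new, flat array and leaves the original untouched.
const names2 = names.concat('Thomas');
console.log(names2.length);  // 4
console.log(names.length);   // 3 (unchanged)

// push appends in place, mutating the original array.
names.push('Thomas');
console.log(names.length);   // 4
```

In Kantu terms, storedVars['names'].push('Thomas') should also work if mutating the stored array in place is acceptable; note that push returns the new length, not the array itself.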

Related

Array.map for Rego or how to combine RBAC with api routes

I would like to define permissions in JSON data such as:
"permissions": [
  {
    "resource": ["users", ":uid", "salary"],
    "action": "GET"
  }
]
Now when evaluating, I want to replace :uid with input.subject. How would I go about this? Is there something like Array.prototype.map() in Rego?
PS: I know I can do this, for example.
allow {
  input.action = ["GET", "POST"][_]
  input.resource = ["users", uid, "salary"]
  input.subject = uid
}
But instead of spelling out each path in the policy, I would like to use RBAC (roles + permissions) so that I can pass those API endpoint permissions as JSON data. Is it possible?
You can certainly write a policy that scans over all of the permissions and checks if there's a match. Here's a simple (but complete) example:
package play

permissions = [
  {
    "resource": "/users/:uid/salary",
    "action": "GET"
  },
  {
    "resource": "/metrics",
    "action": "GET"
  }
]

default allow = false

allow {
  some p
  matching_permission[p]
}

matching_permission[p] {
  some p
  matching_permission_action[p]
  matching_permission_resource[p]
}

matching_permission_action[p] {
  some p
  permissions[p].action == input.action
}

matching_permission_resource[p] {
  some p
  path := replace(permissions[p].resource, ":uid", input.subject)
  path == input.resource
}
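For instance, an input like the following (the values are hypothetical) would satisfy the first permission, since replacing :uid in the resource pattern with input.subject yields the requested path:

```json
{
  "subject": "alice",
  "action": "GET",
  "resource": "/users/alice/salary"
}
```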
The downside of this approach is that each evaluation has to, in the worst case, scan over all permissions. As more permissions are added, evaluation takes longer. Depending on how large the permission set can get, this might not satisfy the latency requirements.
The typical answer to this is to use partial evaluation to pre-evaluate the permissions data and generate a rule set that can be evaluated in constant-time due to rule indexing. This approach is covered on the Policy Performance page. For example, if you run partial evaluation on this policy, this is the output:
$ opa eval -d play.rego -f pretty 'data.play.allow' -p --disable-inlining data.play.allow
+-----------+-------------------------------------------------------------------------+
| Query 1 | data.partial.play.allow |
+-----------+-------------------------------------------------------------------------+
| Support 1 | package partial.play |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | replace("/users/:uid/salary", ":uid", input.subject) = input.resource |
| | } |
| | |
| | allow { |
| | "POST" = input.action |
| | |
| | replace("/metrics", ":uid", input.subject) = input.resource |
| | } |
+-----------+-------------------------------------------------------------------------+
In this case, the equality statements would be recognized by the rule indexer. However, the indexer will not be able to efficiently index the ... = input.resource statements due to the replace() call.
Part of the challenge is that this policy is not pure RBAC: it's an attribute-based policy that encodes an equality check (between a path segment and the subject) into the permission data. If we restructure the permission data a little bit, we can work around this:
package play2

permissions = [
  {
    "owner": "subject",
    "resource": "salary",
    "action": "GET"
  },
  {
    "resource": "metrics",
    "action": "GET"
  }
]

allow {
  some p
  matching_permission[p]
}

matching_permission[p] {
  some p
  matching_permission_action[p]
  matching_permission_resource[p]
  matching_permission_owner[p]
}

matching_permission_action[p] {
  some p
  permissions[p].action == input.action
}

matching_permission_resource[p] {
  some p
  permissions[p].resource == input.resource
}

matching_permission_owner[p] {
  some p
  permissions[p]
  not permissions[p].owner
}

matching_permission_owner[p] {
  some p
  owner := permissions[p].owner
  input.owner = input[owner]
}
This version is quite similar, except that we have explicitly encoded ownership into the permission model. The "owner" field names a key in the input document (here "subject") whose value must equal the resource owner provided under the input's "owner" key.
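For instance, with the restructured model an input like this (the values are hypothetical) would match the first permission, since input.owner equals input.subject:

```json
{
  "subject": "alice",
  "owner": "alice",
  "action": "GET",
  "resource": "salary"
}
```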
Running partial evaluation on this version yields the following output:
$ opa eval -d play2.rego -f pretty 'data.play2.allow' -p --disable-inlining data.play2.allow
+-----------+-------------------------------+
| Query 1 | data.partial.play2.allow |
+-----------+-------------------------------+
| Support 1 | package partial.play2 |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | "salary" = input.resource |
| | |
| | input.owner = input.subject |
| | } |
| | |
| | allow { |
| | "GET" = input.action |
| | |
| | "metrics" = input.resource |
| | } |
+-----------+-------------------------------+
All of the conditions in the rule bodies are now recognized by the rule indexer, and evaluation latency will scale with the number of rules that could potentially match the input. The tradeoff, of course, is that whenever the permissions change, partial evaluation has to be re-executed.

Rego object.get with multilevel key

Is there any way to use object.get with a multi-level key?
My input looks like this:
{
  "pipelineParameters": {
    "k8": {
      "NODES": "1"
    },
    "ec2": {
      "NODES": "0"
    }
  }
}
My data looks like:
{
  "key": "pipelineParameters.k8.NODES"
}
How can I get the value from the input based on this multi-level key?
Sample code
https://play.openpolicyagent.org/p/iR15XnMctP
The object.get function does not support multi-level keys. You could use the walk function for this if you represent the key as an array:
input = {
  "pipelineParameters": {
    "k8": {
      "NODES": "1"
    },
    "ec2": {
      "NODES": "0"
    }
  }
}
For example:
> walk(input, [["pipelineParameters", "k8", "NODES"], "1"])
true
> walk(input, [["pipelineParameters", "k8", "NODES"], x])
+-----+
| x |
+-----+
| "1" |
+-----+
> walk(input, [["pipelineParameters", y, "NODES"], x])
+-----+-------+
| x | y |
+-----+-------+
| "1" | "k8" |
| "0" | "ec2" |
+-----+-------+
To convert your key into an array, you can simply write:
split(key, ".")
For example:
split("pipelineParameters.k8.NODES", ".")
[
"pipelineParameters",
"k8",
"NODES"
]
Putting it all together:
> walk(input, [split("pipelineParameters.k8.NODES", "."), x])
+-----+
| x |
+-----+
| "1" |
+-----+
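For comparison, here is the same multi-level lookup sketched in plain JavaScript (the function name get and the fallback parameter are my own choices, not part of OPA): split the dotted key and walk the object one segment at a time.

```javascript
// Walk `obj` along a dotted key such as "pipelineParameters.k8.NODES",
// returning `fallback` if any segment is missing.
function get(obj, dottedKey, fallback) {
  let current = obj;
  for (const part of dottedKey.split('.')) {
    if (current === null || typeof current !== 'object' || !(part in current)) {
      return fallback;
    }
    current = current[part];
  }
  return current;
}

const input = {
  pipelineParameters: { k8: { NODES: "1" }, ec2: { NODES: "0" } }
};
console.log(get(input, "pipelineParameters.k8.NODES", null));   // "1"
console.log(get(input, "pipelineParameters.gke.NODES", "n/a")); // "n/a"
```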

Find key value pair in PostgreSQL's HSTORE

Given a table games and column identifiers, whose type is HSTORE:
| id | name | identifiers |
|----|------------------|------------------------------------|
| 1 | Metal Gear | { sku: 109127072, ean: 512312342 } |
| 2 | Theme Hospital | { sku: 399348341 } |
| 3 | Final Fantasy | { ean: 109127072, upc: 999284928 } |
| 4 | Age of Mythology | { tbp: 'a998fa31'} |
| 5 | Starcraft II | { sku: 892937742, upc: 002399488 } |
How can I find if a given set of key-value pairs has at least one match in the database?
For example, if I supply this array: [ {sku: 109127072 }, { upc: 999284928 } ], I should see:
| id | name | identifiers |
|----|----------------|------------------------------------|
| 1 | Metal Gear | { sku: 109127072, ean: 512312342 } |
| 3 | Final Fantasy | { ean: 109127072, upc: 999284928 } |
For Rails 5, I think, you should try using the or operator, like:
h = { sku: 109127072, upc: 999284928 }
# hstore values are stored as text, so cast the values to strings
rela = Game.where("identifiers -> ? = ?", h.keys.first.to_s, h[h.keys.first].to_s)
h.keys[1..-1].reduce(rela) { |rel, key| rel.or(Game.where("identifiers -> ? = ?", key.to_s, h[key].to_s)) }
# => relation with OR-ed conditions
For Rails versions before 5 you should use Arel, as described here.

Aerospike: Lua UDF always returns an empty result, even if the UDF returns the stream without any filtering

I cannot understand why aggregateQuery always returns an empty result. I tried to test in aql and had the same problem: 0 rows in set.
Indexes are all there.
aql> show indexes
+---------------+-------------+-----------+------------+-------+------------------------------+-------------+------------+-----------+
| ns | bin | indextype | set | state | indexname | path | sync_state | type |
+---------------+-------------+-----------+------------+-------+------------------------------+-------------+------------+-----------+
| "test" | "name" | "NONE" | "profiles" | "RW" | "inx_test_name" | "name" | "synced" | "STRING" |
| "test" | "age" | "NONE" | "profiles" | "RW" | "inx_test_age" | "age" | "synced" | "NUMERIC" |
aql> select * from test.profiles
+---------+-----+
| name | age |
+---------+-----+
| "Sally" | 19 |
| 20 | |
| 22 | |
| 28 | |
| "Ann" | 22 |
| "Bob" | 22 |
| "Tammy" | 22 |
| "Ricky" | 20 |
| 22 | |
| 19 | |
+---------+-----+
10 rows in set (0.026 secs)
aql> AGGREGATE mystream.avg_age() ON test.profiles WHERE age BETWEEN 20 and 29
0 rows in set (0.004 secs)
It seems that you are trying the example here.
There are two problems with the UDF script. Here is the code of the Lua script:
function avg_age(stream)
  local function female(rec)
    return rec.gender == "F"
  end
  local function name_age(rec)
    return map{ name = rec.name, age = rec.age }
  end
  local function eldest(p1, p2)
    if p1.age > p2.age then
      return p1
    else
      return p2
    end
  end
  return stream : filter(female) : map(name_age) : reduce(eldest)
end
First, there is no bin named 'gender' in your set, so the filter rejects every record and you get 0 rows from the aggregateQuery.
Second, this script isn't doing what the function name 'avg_age' suggests; it just returns the eldest record's name and age.
My code is below; it replaces the reduce function and alters the map and filter functions to fit the requirement. You can simply skip the filter step.
function avg_age(stream)
  -- Running totals. NOTE: globals like these only work reliably for a
  -- single-node demo; on a cluster each node reduces independently, so a
  -- robust implementation should use aggregate() to build per-node
  -- { sum, count } maps and merge them in reduce.
  count = 0
  sum = 0
  local function accept_all(rec)
    return true
  end
  local function age(rec)
    return rec.age
  end
  local function avg(p1, p2)
    if count == 0 then
      -- reduce passes the first element as p1 on the first call only
      sum = p1
      count = 1
    end
    count = count + 1
    sum = sum + p2
    return sum / count
  end
  return stream : filter(accept_all) : map(age) : reduce(avg)
end
The output looks like this:
AGGREGATE mystream.avg_age() ON test.avgage WHERE age BETWEEN 20 and 29
+---------+
| avg_age |
+---------+
| 22 |
+---------+
1 row in set (0.001 secs)

Ruby parsing Array (Special Case)

I am executing a query and getting the following data from the database in an array (a Mysql2 result object):
+-----------+---------------+---------------+------+------+---------------+
| build | platform_type | category_name | pass | fail | indeterminate |
+-----------+---------------+---------------+------+------+---------------+
| 10.0.1.50 | 8k | UMTS | 10 | 2 | 5 |
| 10.0.1.50 | 8k | UMTS | 10 | 2 | 5 |
| 10.0.1.50 | 8k | IP | 10 | 2 | 5 |
| 10.0.1.50 | 8k | IP | 14 | 1 | 3 |
| 10.0.1.50 | 9k | IP | 14 | 1 | 3 |
| 10.0.1.50 | 9k | IP | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | UMTS | 12 | 1 | 1 |
| 10.0.1.50 | 9k | Stability | 9 | 4 | 0 |
| 10.0.1.50 | 9k | Stability | 15 | 1 | 0 |
I want to display it in a table on my UI, something like this:
+-----------+---------------+---------------+------+------+---------------+
| build | platform_type | category_name | pass | fail | indeterminate |
+-----------+---------------+---------------+------+------+---------------+
| | | UMTS | 20 | 4 | 10 |
| | 8k |---------------------------------------------|
| | | IP | 24 | 3 | 8 |
| |---------------|---------------------------------------------|
| 10.0.1.50 | | IP | 26 | 2 | 4 |
| | |---------------------------------------------|
| | 9k | UMTS | 36 | 3 | 3 |
| | |---------------------------------------------|
| | | Stability | 24 | 5 | 0 |
---------------------------------------------------------------------------
I did try using a hash to find the unique platform types for each build. But as I am very new to Ruby, I am having trouble using the hash properly. I would appreciate it if someone could help me parse the data.
Assuming you have an array of arrays:
@data = sql_results.group_by(&:first).map do |b, bl|
  platforms = bl.group_by { |r| r[1] }.map do |p, pl|
    [p, pl.group_by { |r| r[2] }.map { |c, cl| [c, *cl.map { |r| r[3..-1] }.transpose.map(&:sum)] }]
  end.sort_by(&:first)
  [b, platforms]
end.sort_by(&:first)
Here is how to break down the logic:
Group the rows by the first column (build). This returns a hash whose keys are the builds and whose values are arrays of rows.
Group each build's rows by the second column (platform type); within each platform, collect the column values from the third column to the last per category.
Sort the platform lists by platform type.
Sort the build lists by build name.
The resultant structure would look like this:
[
  [
    "10.0.1.50", [
      [
        "8k", [
          ["UMTS", 20, 4, 10],
          ["IP", 24, 3, 8]
        ]
      ],
      [
        "9k", [
          ["IP", 26, 2, 4],
          ["UMTS", 36, 3, 3],
          ["Stability", 24, 5, 0]
        ]
      ]
    ]
  ]
]
You can use this in your view layout, for example:
%table
  %tr
    - %w(build platform_type category_name pass fail indeterminate).each do |name|
      %th= name
  - @data.each do |build, build_list|
    %tr
      %td= build
      %td{:colspan => 5}
        %table
          - build_list.each do |platform, platform_list|
            %tr
              %td= platform
              %td{:colspan => 4}
                %table
                  - platform_list.each do |row|
                    %tr
                      - row.each do |attr|
                        %td= attr
If you use an AR model, here is what you do:
class Build < ActiveRecord::Base
  def self.builds_by_platform
    reply = Hash.new { |h, k| h[k] = Hash.new { |h2, k2| h2[k2] = [] } }
    Build.order("build ASC, platform_type ASC").find_each do |row|
      reply[row.build][row.platform_type] << row
    end
    reply.map { |b, bh| [b, bh.sort_by(&:first)] }.sort_by(&:first)
  end
end
In your controller you can access the normalized data as:
@report_list = Build.builds_by_platform
You can use the @report_list variable for rendering the table.
